The pool configuration file controls the ‘server’ (proxy) side of PHP. The ‘listen’ line in this file tells where proxy requests should be accepted.
‘user’ and ‘group’ tell under which user account the process should run. ‘listen.owner’, ‘listen.group’ and ‘listen.mode’ can be set to limit access to the proxy (by other users/sites).
Sample PHP FPM pool configuration
For each version of PHP that should be available, create a pool configuration file (in ‘/etc/php/<version>/fpm/pool.d/’) like:
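A minimal sketch of such a pool file (the pool name, user/group and socket path below are placeholders to adjust):
[php82]
user = www-data
group = www-data
listen = /run/php/php8.2-fpm-mysite.sock
listen.owner = www-data
listen.group = www-data
listen.mode = 0660
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
pm.status_path = /fpm-status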
(change the “listen =” line so it matches the PHP version you wish to use, then point the virtualhost configuration, or the override segment in .htaccess, to that same socket)
In the virtualhost configuration, that same listening socket must be used. The owner of the httpd process must have rights to talk to the proxy, so access can be set up per user or per group, depending on how the pool configuration was done.
Sample virtualhost, allows override of PHP version and access to /fpm-status
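A sketch of the relevant parts (requires mod_proxy_fcgi; server name, paths and socket name are assumptions matching the pool example above):
<VirtualHost *:80>
    ServerName mysite.example
    DocumentRoot /var/www/mysite/html
    <FilesMatch "\.php$">
        SetHandler "proxy:unix:/run/php/php8.2-fpm-mysite.sock|fcgi://localhost"
    </FilesMatch>
    <Directory /var/www/mysite/html>
        # FileInfo lets a .htaccess override the SetHandler line, switching PHP version per directory
        AllowOverride FileInfo
    </Directory>
    <Location /fpm-status>
        SetHandler "proxy:unix:/run/php/php8.2-fpm-mysite.sock|fcgi://localhost"
        Require local
    </Location>
</VirtualHost>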
When running both Apache httpd and PHP as a specific user, the files on the web site only need user read (and, where needed, write) access, as long as they are owned by the user running the processes.
To make a permanent change to the umask used by PHP, add a ‘UMask=’ line (for example ‘UMask=0002’) to the ‘[Service]’ section of each PHP FPM service:
systemctl edit php8.2-fpm.service
Add:
[Service]
UMask=0002
Then reload the systemd daemon and restart the FPM service:
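systemctl daemon-reload
systemctl restart php8.2-fpm.service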
Download all files attached to an item page at archive.org
Navigate to the item page you want to download all the files from.
Download the XML filelist (named after the item; get the file ending with “_files.xml”).
Parse the filelist for the files (quick and ugly):
grep "file name=" someitem_files.xml | sed s:\<file\ name=:\<a\ href=:g | sed s:\>:\>file\<\/a\>:g
This keeps the lines containing “file name=” and creates output containing only (relative, as in the file list) HTML links to each file.
Redirect the output to a file (I assume you know how), then download with wget:
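For example, assuming the link list was saved as links.html (-F makes wget treat the input file as HTML, and --base resolves the relative links against the item’s download URL):
wget -F -i links.html --base=https://archive.org/download/someitem/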
For more advanced downloading, I have created a set of scripts (not yet released) that allow downloading of a complete collection (of other item pages), or of everything uploaded by a specific user. My scripts will also create ‘md5sum -c’ compatible lists from the _files.xml files, execute the checking and optionally delete corrupt files for re-downloading.
This article documents my methods for preserving floppy disks. There is probably a better way that I haven’t thought about yet.
The downside of the methods described herein is that the Kryoflux project is more or less abandoned, and the methods are not as easy to implement with the widely available and supported Greaseweazle equivalent.
General guidelines
Always use a clean, known good floppy drive for preservation attempts.
For the first read attempt, use the GUI for simplicity and generating the logs without having to bother with the command line parameters.
If there’s a label on the disk, use it to identify the disk being read, so you can easily find it later for re-reads of failed tracks.
If the format is known, select that in the output format selection drop down.
Software used
I use the latest/last windows version of the dtc (Kryoflux) software, which can be found on the Kryoflux download page. Since the Linux version was recently also updated to the last release (3.00), it should be equally usable for my methods.
Within windows, I use the microsoft Ubuntu shell for all operations except the ‘dtc’ command, which is run in a microsoft shell. If you find suitable alternatives to ‘grep’ (search in files), ‘split’ (split file into parts) and ‘cat’ (join file parts alphabetically into a single file), I see no reason this couldn’t be done using only the windows shell.
Guessing the disk format
Guessing the format and saving data as Kryoflux preservation raw files
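From the command line, a run along these lines should correspond to what the GUI does (“DiskID” is a placeholder; -i0 writes the Kryoflux preservation stream files, and a second output decodes a disk image, here Amiga with -i5):
dtc -fDiskID/DiskID -i0 -fDiskID.adf -i5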
Good reads: Find disks which were read 100% OK on the first read of all tracks
grep -L bad *log|sort
Find disks without any non-recoverable read errors
grep -L failed *log|sort
Bad / incomplete reads: Find disks which have at least one non-recoverable read error
grep -l failed *log|sort
Find disks which have at least one track below 80 indicated as unformatted (the regex as originally published was incomplete; assuming log lines start with the track number, something like this should work):
grep -l "^[0-7][0-9]\..*unformatted" *log|sort
Re-reading bad tracks
If possible, use another, newly cleaned, disk drive to try to re-read the tracks that previously failed. The same method can also be used to combine two mastered disks with errors on different tracks if re-reading from the disk with the failed tracks still isn’t possible.
Use the track format verification options if you’re sure about the format (will do no damage, but adds extra info to the logs). -i2 (CT RAW) is selected as a verification format by the GUI, so I keep that and add Amiga (-i5) in the example.
Read one track at a time – this seems to increase the chance of correct results, since the read head has to move directly to that location instead of just ”dragging” itself over the damaged floppy while it is rotating.
Find which tracks need to be re-read
grep failed DiskID.log
Re-read tracks with errors, save as Kryoflux preservation format, and try to verify as the possible format(s)
The parameters -s and -e set the start and end track. Use the same value for both, even if the failed tracks are located next to each other (see above).
If for example tracks 21, 66, 67, 68, 69 and 70 failed when reading the disk for the first time:
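The re-read commands could then look something like this, one invocation per track (“DiskID” is a placeholder; -i2 and -i5 are the verification formats discussed above):
dtc -fDiskID/DiskID -i0 -i2 -i5 -s21 -e21
dtc -fDiskID/DiskID -i0 -i2 -i5 -s66 -e66
dtc -fDiskID/DiskID -i0 -i2 -i5 -s67 -e67
dtc -fDiskID/DiskID -i0 -i2 -i5 -s68 -e68
dtc -fDiskID/DiskID -i0 -i2 -i5 -s69 -e69
dtc -fDiskID/DiskID -i0 -i2 -i5 -s70 -e70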
Take a note of which (if not all) of the tracks were recoverable using this read method. Even if a track fails to be read, a data file will be stored.
Methods of combining multiple reads into one
Using the raw files (Kryoflux preservation format) from both reads
If you have the raw files from the first read, copy them to another place and then copy the raw files from the new read into that folder (replace those from the first read).
Create the floppy disk image using this mix of source files from either the GUI or the command line.
Using the new raw files and an incomplete disk image from the first read
Some knowledge about the disk format is needed for this method. The most important parameter is the number of bytes per track (in the case of the Amiga it is 512*11*2, which is 11264).
Use the DTC GUI or command line to create the assumed floppy image type (Amiga in this case) from the raw data files. This image will be incomplete, containing only the re-read tracks.
Split the old and the new image into track-sized parts:
The files will (by default) be named xaa, xab etc., but -d changes this to x00, x01 … The prefix (x) could also be changed, but that depends on the implementation of the split command used. Safest is to split into two subdirectories and keep the default names, like:
(current directory holds copies of both disk image files to combine)
mkdir old
cd old
split ../old.adf -b11264 -d
cd ..
mkdir new
cd new
split ../new.adf -b11264 -d
Copy the new tracks (those that were correctly read) into the ”old” folder:
cp x21 x66 x67 x68 x69 x70 ../old
Join (now mixed) content in ”old” as a new disk image file:
cd ../old
cat x* >>../combined.adf
That’s it. This guide has not been tested recently, but was just jotted down while preserving some badly damaged floppies about a year ago. My post in the Kryoflux forum: Method for reading problematic disks (?)
Yesterday I noticed that the LEDs were blinking amber on one of my LS220D boxes. My initial thought was that a disk had failed (it’s just a backup of my backup). Checked with the “NAS Navigator” application, and it stated that it was unable to mount the data array (md10) (I have not logged the full error message here, as I continued the attempts to solve the situation).
dmesg output
I logged in as root (see other posts) to check what had gone wrong.
‘dmesg’ revealed that a disk had been lost during smartctl (multiple repeats of the below content):
As I was able to mount the partition, I did a file system check after unmounting it:
[root@BUFFALO-4 ~]# xfs_repair /dev/md10
Phase 1 - find and verify superblock...
Not enough RAM available for repair to enable prefetching.
This will be _slow_.
You need at least 1227MB RAM to run with prefetching enabled.
Phase 2 - using internal log
- zero log...
- scan filesystem freespace and inode maps...
- found root inode chunk
Phase 3 - for each AG...
- scan and clear agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
...
- agno = 30
- agno = 31
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
...
- agno = 30
- agno = 31
Phase 5 - rebuild AG headers and trees...
- reset superblock...
Phase 6 - check inode connectivity...
- resetting contents of realtime bitmap and summary inodes
- traversing filesystem ...
doubling cache size to 1024
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done
[root@BUFFALO-4 ~]# mount /dev/md10 /mnt/array1
[root@BUFFALO-4 ~]# ls /mnt/array1/
backup/ buffalo_fix.sh* share/ spool/
Another reboot, then checking to find out that md10 was still not mounted.
The error in NAS Navigator is: “E14:RAID array 1 could not be mounted. (2022/07/14 12:36:18)”
Time to check ‘dmesg’ again:
md/raid1:md2: active with 1 out of 2 mirrors
md2: detected capacity change from 0 to 1023410176
md: md1 stopped.
md: bind<sdb2>
md/raid1:md1: active with 1 out of 2 mirrors
md1: detected capacity change from 0 to 5114888192
md: md0 stopped.
md: bind<sdb1>
md/raid1:md0: active with 1 out of 2 mirrors
md0: detected capacity change from 0 to 1023868928
md0: unknown partition table
kjournald starting. Commit interval 5 seconds
EXT3-fs (md0): using internal journal
EXT3-fs (md0): mounted filesystem with writeback data mode
md1: unknown partition table
kjournald starting. Commit interval 5 seconds
EXT3-fs (md1): using internal journal
EXT3-fs (md1): mounted filesystem with writeback data mode
kjournald starting. Commit interval 5 seconds
EXT3-fs (md1): using internal journal
EXT3-fs (md1): mounted filesystem with writeback data mode
md2: unknown partition table
Adding 999420k swap on /dev/md2. Priority:-1 extents:1 across:999420k
kjournald starting. Commit interval 5 seconds
EXT3-fs (md0): using internal journal
EXT3-fs (md0): mounted filesystem with writeback data mode
The above shows that md0, md1 and md2 came up, but each is missing its mirror partition (the one from /dev/sda, which disappeared).
Further down in dmesg output
md: md10 stopped.
md: bind<sda6>
md: bind<sdb6>
md/raid0:md10: md_size is 15565748224 sectors.
md: RAID0 configuration for md10 - 1 zone
md: zone0=[sda6/sdb6]
zone-offset= 0KB, device-offset= 0KB, size=7782874112KB
md10: detected capacity change from 0 to 7969663090688
md10: unknown partition table
XFS (md10): Mounting Filesystem
XFS (md10): Ending clean mount
XFS (md10): Quotacheck needed: Please wait.
XFS (md10): Quotacheck: Done.
udevd[3963]: starting version 174
md: cannot remove active disk sda6 from md10 ...
[root@BUFFALO-4 ~]# mount /dev/md10 /mnt/array1/
[root@BUFFALO-4 ~]# ls -l /mnt/array1/
total 4
drwxrwxrwx 3 root root 21 Dec 14 2019 backup/
-rwx------ 1 root root 571 Oct 14 2018 buffalo_fix.sh*
drwxrwxrwx 3 root root 91 Sep 16 2019 share/
drwxr-xr-x 2 root root 6 Oct 21 2016 spool/
What the h… “cannot remove active disk sda6 from md10”
Checking md raid status
[root@BUFFALO-4 ~]# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md10 : active raid0 sda6[0] sdb6[1]
7782874112 blocks super 1.2 512k chunks
md0 : active raid1 sdb1[1]
999872 blocks [2/1] [_U]
md1 : active raid1 sdb2[1]
4995008 blocks super 1.2 [2/1] [_U]
md2 : active raid1 sdb5[1]
999424 blocks super 1.2 [2/1] [_U]
unused devices: <none>
[root@BUFFALO-4 ~]# mdadm --detail /dev/md10
/dev/md10:
Version : 1.2
Creation Time : Fri Oct 21 15:58:46 2016
Raid Level : raid0
Array Size : 7782874112 (7422.33 GiB 7969.66 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Fri Oct 21 15:58:46 2016
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Chunk Size : 512K
Name : LS220D896:10
UUID : 5ed0c596:60b32df6:9ac4cd3a:59c3ddbc
Events : 0
Number Major Minor RaidDevice State
0 8 6 0 active sync /dev/sda6
1 8 22 1 active sync /dev/sdb6
So here, md10 is fully working, while md0, md1 and md2 are missing their second device. Simple to correct, just add the missing partitions back:
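From the mdstat output above, the missing halves are sda1, sda2 and sda5, so the commands were along these lines:
mdadm /dev/md0 --add /dev/sda1
mdadm /dev/md1 --add /dev/sda2
mdadm /dev/md2 --add /dev/sda5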
Some time later, sync was finished, and I rebooted again. Finally, after this reboot /dev/md10 is automatically mounted to /mnt/array1 again.
Problem solved 🙂
smartctl notes
The values of attributes 5, 197 and 198 should be zero for a healthy drive, so one disk in the NAS is actually failing, but the cause of the hiccup (the disconnect) was a core dump by the smartctl weekly scan.
Because my current internet provider refuses to give me a public IP (providing only CGNAT), I can no longer access my stuff at home as I wish. This post describes my workaround for the problem.
External access server
This is the server the tunnel connects to from the inside, and the one I connect to from the outside. I use an Oracle Free Tier VM for this.
On the access server, I have set up a (normal) user account for the tunnel, then created a private key using Puttygen and added the public key to the .ssh/authorized_keys file for the user. /etc/ssh/sshd_config needs to be modified by adding the line
GatewayPorts clientspecified
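After the change, restart sshd so it takes effect (the service may be named ‘ssh’ on Debian/Ubuntu):
sudo systemctl restart sshd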
Do not forget to check that you are able to log in to the server using the user set up for this purpose (set the private key in Putty, or use ssh -i with that key).
Firewall on the access server
If the access server is behind other firewalls, you need to open the port(s) you want to connect to. For the Oracle VMs, this is done via the web UI:
Virtual Cloud Networks, click the VCN name, click the subnet name, click the security list (“Default security list” unless you have done it the recommended way to create separate security lists). Then (at last), “Add Ingress Rules”:
Source CIDR: 0.0.0.0/0 (unless you want to limit, but just for testing this will allow everyone to connect)
Source port: blank
Destination port: the port you want to connect to. An unprivileged user (the SSH user account) can only use ports 1024 and up.
The same port(s) also need to be opened on the access server itself.
I prefer using firewalld for this:
# firewall-cmd --zone=public --permanent --add-port={your-port-number-here}/tcp
# firewall-cmd --reload
GUI for plink (from Putty) to keep a reverse SSH tunnel open
https://myentunnel.informer.com/download/
Download the file (myentunnel_setup-3.6.1.exe), install it and then replace the included plink.exe with the current version included by the Putty installation.
Putty location: C:\Program Files\PuTTY
myEntunnel location: C:\Program Files\MyEnTunnel
Files used for the default profile: localports.txt (blank), remoteports.txt, keyfile.ppk (used for connecting to the server)
Ref: https://superuser.com/questions/235395/automatic-ssh-tunneling-from-windows
MyEntunnel configuration
Settings tab:
The obvious section, server (name or IP) and username, I suppose you know what to fill in there 🙂
No passphrase is needed, since we’re connecting with a private key only (this is also required by Oracle VMs)
As I forgot to note down the default settings, I just provide a snapshot of the settings I have in use (most are default values):
Tunnels tab:
Only the remote side needs to be filled in, syntax as the description below the input fields (per tunnel to create):
[listen-IP:]listen-port:host:port
Where:
listen-IP is the LOCAL IP of the access server (a private IP address if behind NAT, as with the Oracle VMs, which by default are usually on the 10.x.x.x network)
listen-port is the port opened in the inside and outside firewalls (the port on which you will access the inside stuff)
host is the inside host; this can be localhost for the computer running MyEntunnel, or any other host reachable from that computer
port is the port on the inside host
A complete line could look like this:
10.0.0.3:18180:192.168.101.180:80
(will access port 80 on the inside host IP 192.168.101.180, when going through the access server at port 18180)
We have identified an issue affecting a subset of customers who have become unable to access their Oracle Cloud Infrastructure resources.
Customer Impact: Some customers with Free Tier accounts, using Ephemeral or Reserved Public IPs will be unable to access their resources due to the unintentional reclamation of the IPs associated with their Virtual Machines.
While we have taken steps to ensure no further impact occurs, any affected Public IPs will need to be re-established by reassigning a new Public IP through the Oracle Cloud Infrastructure Console, REST API, SDK CLI or other tools, as described in the following documentation:
If a preferred public IP is configured, the public IP assignment may still be reassigned subject to its availability.
Solution:
Assign a new IPv4 address to your virtual machines:
1. Log in to Oracle Cloud (you have the URL somewhere in an email)
2. Find your machines (the listing), menu: compute / instances (https://cloud.oracle.com/compute/instances)
2b. You might have to select the compartment where your VMs are located, even if you only have the ‘root’ compartment.
3. In the machine list, click the machine name.
4. Scroll down to the “Resources” section (at the left edge), click “Attached VNICs”.
5. In the VNIC list, click the name (Primary VNIC).
6. Scroll down to “Resources”, and click “IPv4 Addresses”.
7. At the right side of the window, click the three dots (which are hidden beneath the “Support” icon), then click “Edit” from the menu that pops up.
8. Click the “Ephemeral public IP” option, fill in an optional name, then click “Update”
Now, the remaining steps are updating DNS for stuff pointing to the servers (if you have any), and updating connections (SSH) to reflect the new IP.
Quizzer was written by me, mostly between 1999 and 2000. I wrote this system entirely in Perl (a CGI script on a Solaris host) because there were no good enough applications out there. As this was a private project, I made no attempt to sell it (even though I had prepared for that, see the extensive documentation).
You can find Quizzer up and running on https://quizzer.webit.nu/
Documentation updated to some point in time: https://quizzer.webit.nu/docs/
Most of the question databases (plain text following some rules) were rewritten from existing resources, but the questions shown in the video are ones I wrote myself from reading the Solaris 8 System Admin manuals.
Preparing the new server for CGI execution
Besides my standard setup for a Linux server with Apache/PHP/MySQL, I also switched over to using fcgid and php-fpm to be able to use PHP 8.1 as the default, with a per-directory or per-vhost configuration to switch over to PHP 7.4 when needed.
Enable CGI-execution module for Apache
a2enmod cgid
Enable CGI-execution for the virtual host
Add these lines to the virtual host configuration. The additions below also adjust what is considered to be an index page, and add configuration to prevent downloading of files with some specific extensions (this should preferably be done in the server main configuration).
DirectoryIndex index.cgi index.php index.html index.htm
<Directory /var/www/quizzer.webit.nu/html>
AllowOverride All
Options +ExecCGI
</Directory>
AddHandler cgi-script .cgi
<FilesMatch "\.(?:inc|pl|py|rb)$">
Require all denied
</FilesMatch>
Check that CGI-script works
Use this simple CGI script to check that it works (test.cgi):
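For example, something like this (a minimal stand-in written as a shell CGI; any executable that prints a valid header works):
#!/bin/sh
echo "Content-type: text/plain"
echo
echo "CGI works"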
The script also has to be executable; then restart Apache to reload the configuration:
chmod 700 test.cgi
service apache2 restart
Updating the code for a new Perl version
(Screens from my actual code)
How to make Perl include files in the current directory
At some point in time, Perl got a security fix that no longer allows the current directory (the script directory) to be considered when including other code files. This broke my script badly.
There are several methods around this problem, and I ended up solving it my own way: I wrote a two-line wrapper for ‘/usr/bin/perl’, and saved it as ‘/usr/local/bin/perl’ (which was my command line in all scripts):
#!/bin/sh
PERL_USE_UNSAFE_INC=1 /usr/bin/perl "$@"
This method required no modification of any of my source files to get them execute correctly and find their included files.
defined not allowed on array anymore
For some reason, it is no longer possible to use ‘defined @array’ to check if the variable has been set. So I had to replace every occurrence of ‘defined @’ with just ‘@’, which made my code much more unreadable:
Before (a representative example; the originals were screenshots):
if (defined @myarray) { ... }
After:
if (@myarray) { ... }
According to Perldoc: “Use of defined on aggregates (hashes and arrays) is no longer supported.”
After these modifications everything worked fine, except some small configuration mistakes of the quiz system itself (handling compressed question databases and pointing to some incorrect temporary locations).
Test it, use it if you wish
It took me some time to figure out how to create new users for storing personal test history. I had made this as simple as possible: just type in anything unique (not already registered) that looks like an email address, and a password you want to use.
The system sets up a demo account for you if that user name is not in use.
“Personal” history for the non-logged in demo user looks like this:
(upper part)
(graphical overview)
(detailed report)
The “Find a hole” challenge is off
As this is old revived code, and no holes in the code were reported at the time it was online (1999-2002), I had to make a hole 🙂
This is valid as long as I make no new databases for the system (then if that happens, I decide what to do at that point).
Get full access to all UNIX questions
All m$ questions are available in demo mode, so no fully activated account is needed for these. I recommend you create your own personal ‘demo’ account for the m$ questions to be able to view history.
So: simply use your external IP-address as the user name, and the password “FullAcccess2022” to give yourself a fully enabled user 🙂
I dug into the sqlite databases used by the JottaCloud client (and branded ones like Elgiganten) and found something that can be useful for other diggers…
This documentation is for the windows version of the client. The path to the database files and the path formats within the databases will differ for the client for other OSes.
11-Jan-2023: Updated example queries in the comments. Added ‘Find duplicates’.
Preparing
This method works for finding the location on the windows version:
Open the client interface, go to settings, then under the “General” tab, you will find a button that opens the log file location:
A window with the location ‘C:\Users\{myuser}\AppData\Roaming\Jotta\JottaWorld\log’ will be opened. Go to the parent directory, and there you will find the ‘db’ directory.
Keep this location open and QUIT the Jotta client (from the taskbar or any other effective method)
Copy the ‘db’ (or its parent ‘JottaWorld’) folder to a work- (or backup) location. NEVER do anything without having a backup copy of the ‘db’ folder, or even the whole ‘JottaWorld’ (parent) folder in case something goes wrong.
Examining the databases
From here, I will examine each of the databases (.db files) and go through what I’ve found out. I will use the sqlite3 client supplied by microsoft-invented Ubuntu; the alternative (on windows) is to use a native sqlite3 client the same way, or to just copy the ‘JottaWorld’ or ‘db’ directory to a computer with Linux (or any other real operating system) installed.
Basic sqlite3 usage
To open the database in sqlite3, simply use the sqlite3 command followed by the database name:
sqlite3 c.db
To show all tables in a database:
.tables
To show the table layout:
.schema {table name}
Select and update statements work basically as in other SQL clients.
c.db (outside the ‘db’ folder)
An empty database with a single table ‘c’, defined as:
CREATE TABLE c (id INTEGER PRIMARY KEY ASC AUTOINCREMENT,type integer, time integer, size integer, attempts integer, checksum string, path string, known );
The use of it is for me unknown (as the table is empty in my db).
This database was last changed almost two years before I stopped the Jotta client.
The use of it is for me unknown (as the table is empty in my db).
This database was last changed a week before I stopped the Jotta client.
dlsq.db
Database for the Jotta Sync folder. This folder is by default synced in full on all computers set up against the same Jotta account. There is no selective sync or OneDrive-like on-demand sync in Jotta, the only option is to completely disable the sync folder on the “Sync” tab in the settings. The sync folder location can be changed there too.
Mostly self-explanatory, except for the two fields I’m unable to explain 🙂 jwc_shareid is in the form of the jwc_uuid given above; jwc_owner is probably some secret string about my user (at Jotta) that I’m not supposed to share. It’s a 24 character alphanumeric string.
jobs.db
Contains only one table ‘jobs’ defined as
CREATE TABLE jobs (id integer primary key autoincrement, status integer, uri, name, path, databasepath, files integer, bytes integer );
The use of it is for me unknown (as the table is empty in my db).
This database file was last changed almost a year before I stopped the client.
mm.db
Backup folders. This is the only database I have made manual changes to (I made the listed folder name in the GUI more obvious on some entries). Never change anything without having a backup, and never change anything while the client is running.
Tables:
backup_schedule
The backup schedule (Schedule tab in settings)
backup_schedule_copy
Backup copy of the backup schedule
excludes
Files and folders excluded from backup
excludes_copy
Internal backup copy of the excludes table
mountpoints
All backup folders set in the client
backup_schedule and backup_schedule_copy
The backup schedule in settings seems to be a very simplified one. Judging by the database, it looks like they prepared to allow for different backup time settings every day (I don’t know if it works).
The table is defined as:
All self-explanatory except “mountpoint”, which is set to “-1” when I create a schedule. If the schedule is set to any of the multi-day variants (“weekends”, “weekdays”, “everyday”), there will be multiple entries in the database, one for each day:
With “everyday” selected:
sqlite> select * from backup_schedule;
1|-1|Monday|2|0|Monday|7|0
2|-1|Sunday|2|0|Sunday|7|0
3|-1|Saturday|2|0|Saturday|7|0
4|-1|Wednesday|2|0|Wednesday|7|0
5|-1|Tuesday|2|0|Tuesday|7|0
6|-1|Friday|2|0|Friday|7|0
7|-1|Thursday|2|0|Thursday|7|0
And with “weekends” selected instead:
sqlite> select * from backup_schedule;
1|-1|Sunday|2|0|Sunday|7|0
2|-1|Saturday|2|0|Saturday|7|0
sqlite>
My guess about the ‘mountpoint’ column (which is set to “-1” by the schedule settings in the client) is that it refers to the ‘mountpoints’ table, so theoretically it should be possible to create separate schedules for each of the mountpoints by entering them directly into the database…
The ‘backup_schedule_copy’ table contains the schedule before making changes through the client.
excludes and excludes_copy
All files and folders that are excluded from the backup. This also includes the system and hidden files and folders that are not backed up by default. From the client settings, it is possible to include hidden files and folders.
The table is defined as:
Not much to explain here. ‘mountpoint’ is set to ‘-1’, and I find no possible use for it to match an entry in the ‘mountpoints’ table. ‘pattern’ allows for simple pattern matching (*) for the full local path of a file or folder to exclude from backup.
mountpoints
This table contains all the backup folders defined in the client.
The table is defined as:
jwc_status, can be any of the following:
Scanning
ScheduleWaiting
AllGood
Uploading
QueuedForScan
jwc_location
‘Local’ or ‘Remote’
jwc_type
Zero on all my entries
jwc_ip
127.0.0.1 for local paths, empty for remote
jwc_suspended
“Suspended” for paused backups, blank otherwise
I find the content of jwc_status to be incorrect more often than correct; while writing this, the client is scanning one of my network drives, but in the database it says “Uploading”. Many entries are “Up to date” according to the client, but are listed as different things in the db.
reque_c and reque_u
Two more sqlite3 database files, stored without the .db extension.
reque_c contains a table with queued uploads (scanned files, in queue for checksumming), which has the same definition as reque_u. As these files are queued for checksumming, the “checksum” field in the blob is an empty string. Content of the extraData fields in the blob is written to sm.db in (before) this stage.
reque_u contains a table with queued uploads (checksummed, waiting for upload slot):
CREATE TABLE uploads (id INTEGER PRIMARY KEY, tag INTEGER, blob BLOB );
id: just the entry id, duplicated (last value) in the blob
tag: the oddly named field for the mountpoint id (in mm.db), repeated in the blob
blob: contains a JSON array of file information:
Most content of the blob is self-explanatory if you have read until here.
checksum: the md5 checksum of the file
cre, mod: timestamps of creation and last modification
extraData:id is the new file id, and extraData:parent is the folder containing the file (the folders table in sm.db). This information was written to the database in the scanning phase (reque_c).
sm.db
Contains information on all backed up files.
Tables:
files
Information for all backed up files
folders
Information for all backed up folders
mountpoint_status
(empty)
folders
The table is defined as:
CREATE TABLE folders (id integer primary key autoincrement, path text UNIQUE, state integer, parent integer, mountpoint integer, checksum varchar(20));
path
Full local path to the folder
state
Contains a value of 1, 2, 5, 6 or 7 in my database; I have no idea what it represents
parent
Id of parent folder (in this table)
mountpoint
mountpoint id in mm.db
checksum
md5 checksum of something (a folder itself cannot be checksummed)
files
The table is defined as:
CREATE TABLE files (id integer primary key autoincrement, path text UNIQUE, parent integer, size integer, modified integer, created integer, checksum varchar(16), state integer, mountpoint integer);
path
The full path of the backed up file
parent
the id of the containing folder (in folders table)
size
file size
modified
timestamp of modification
created
timestamp of creation
checksum
md5 checksum of file
state
Contains a value of 6 or 7 in my database; I have no idea what it represents
mountpoint
mountpoint id in mm.db
So why all this trouble analyzing the databases?
I wanted an easy way of finding my files by their md5 checksums; that was one of the reasons. Another thing (not solved yet) is that I want to find a way of recreating the share link for a specific file or folder within a public shared folder on my Jotta account (without going through the web interface; I mean, it’s already shared inside an accessible folder).
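As a sketch of what this enables (always query a copy of the db; the checksum value below is just an example), looking up a file by its checksum, and listing duplicate checksums:
sqlite3 sm.db "select path from files where checksum = 'd41d8cd98f00b204e9800998ecf8427e';"
sqlite3 sm.db "select checksum, count(*) from files group by checksum having count(*) > 1;"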
Odd things noticed: there are md5 checksums for folders, and three different ones in the sync folder (the jwt_files and jwt_folders tables in dlsq.db), but for the individual files there is only the file’s real md5 checksum.
Anyway… that investigation will continue some other day…
Comment below if you find the way to calculate the share-id, or find it useful in any other way 🙂
This guide is a continuation / restart of piStorm – getting started with Emu68, which was written as a starters’ guide for just getting Emu68 up and running on the piStorm.
Since I wrote that guide, Michal has added similar instructions as I present here to the resources at GitHub.
As described before, you should look for the latest file named something like “Emu68-pistorm-20211220-62363e.zip”. The content of this file will in the last step be copied to the root of the Fat32 partition of the SD-card.
Preparing the SD-card
Emu68 presents partitions with the 0x76 ID as hard drives to the Amiga side through the “brcm-sdhc.device”, so we need to create at least two partitions on the SD-card (which normally comes prepared as a single Fat32 partition).
I do this entirely using the command line program “diskpart” on my Windows computer.
Find the command prompt, either somewhere in the windows menu or by using the search function and searching for “cmd”. Right click the icon and select “Run as administrator”. You will get a warning that you are going to do something dangerous; accept that one 🙂
The dangerous part
Insert your SD-card, then run “diskpart” from the command prompt. List all recognized disks by the diskpart command “list disk”. If you find the obvious disk (in my case it’s “disk 3”) that must be your SD-card, use the “select disk” command to select it as the current one. List the partitions on the disk with “list part” to ensure you are working on the right disk.
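In my case, the session looked something like this (the disk number is an example):
list disk
select disk 3
list part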
In any case of uncertainty, exit “diskpart” using the “exit” command, remove the SD-card, then run “diskpart” again and list the disks. The missing one is your SD-card.
Use the “clean” command to remove all partitions on the SD-card (seen as the last command in the image above, and as the first in the image below). Then create the Fat32 partition. Only a few MB are needed, but I usually allocate 200MB for the Emu68 and kickstart files.
As shown below, I then create a 500MB partition, a 2GB partition and one for the rest of the space on the card (in this case 26GB), and with the same command I set the partition ID to 0x76 (which needs to be specified so the Amiga can find the emulated disks).
Exit “diskpart” with the command “exit”, and then exit the command line shell, also with “exit”. Commands above (create partitions):
cre part pri size=200
cre part pri size=500 id=76
cre part pri size=2000 id=76
cre part pri id=76
Give the Fat32 partition a drive letter
sel part 1
assign
Format the Fat32 partition
This can be done in many ways, but as we already are inside ‘diskpart’, I present the easiest way first 🙂
format fs=FAT32 label=Emu68 quick
Another way is to accept the format request that windows pops up directly after the “assign” command in ‘diskpart’, or to do the same thing from Explorer, as described below:
Go to “This PC” in explorer (the file explorer, not the ancient web browser), right click the small partition on the SD-card and select “Format”.
Copy the files from the latest nightly Emu68 to the root of the SD-card. This can be done using WinRAR (rarlab.com) without extracting the files. Just select the latest nightly archive, right click and choose “Extract files…”, then type in the drive letter for the 200MB partition:
Copy your choice of Kickstart ROM (usually a kickstart for the A1200) file to the root of the Fat32 partition and update the config.txt accordingly.
Now the SD-card is ready to be booted with the piStorm. Boot from some floppy with the AmigaOS hard drive installation utilities, change the HDToolBox tooltype SCSI_DEVICE_NAME to brcm-sdhc.device, then start HDToolBox and set up the partitions on the disks with IDs other than 0 (zero) (ID 0 represents the whole card and should not be used within AmigaOS).
This procedure is well described in the current documentation by Michal (and on a lot of other places), so head over there and read his guide.
If you decide to install AmigaOS 3.2, you do not need to use the PFS3aio filesystem. FFS works fine with large disks and partitions in this release.
Download the latest nightly build of the early alpha Emu68 for piStorm from the resource above. You should be looking for a file named “Emu68-pistorm-20211106-9c3186.zip” or similar (note that the files are sorted in forward alphabetical order, so the latest ones are a bit down the list).
Extract the files to the root of a fat32-formatted SD-card.
Copy your Amiga kickstart file to the root of that card.
Edit the configuration file (config.txt) and set the kickstart file name (last line in the included config file):
...
# PiStorm variant - use initramfs to map selected rom
initramfs kick.rom
Insert the SD-card in your pi3a+ mounted to the piStorm. Power on the Amiga and enjoy the extremely short startup time 🙂
The setup is now ready to boot from floppy (although many games do not work yet, at least not when booting from floppy; Workbench floppies do work, as does the Install3.2 disk used to install AmigaOS onto a hard drive).
Installing AmigaOS on a hard drive (partition on SD-card)
For hard drive setup, there will be more steps involved, such as partitioning the sd-card into at least one boot partition and one or more Amiga partitions.