I revived “Quizzer”

Quizzer was written by me, mostly between 1999 and 2000. I wrote the system entirely in Perl (a CGI script on a Solaris host) because there were no good enough applications out there. As this was a private project, I made no attempt to sell it (even though I had prepared it for that, see the extensive documentation).
You can find Quizzer up and running on https://quizzer.webit.nu/
Documentation (only updated up to some point in time): https://quizzer.webit.nu/docs/

Most of the question databases (plain text following some rules) were rewritten from existing resources, but the questions shown in the video are ones I wrote myself while reading the Solaris 8 System Administration manuals.

Preparing the new server for CGI execution

Besides my standard setup of a Linux server with Apache/PHP/MySQL, I also switched over to using fcgid and php-fpm, so that PHP 8.1 can be the default and a per-directory or per-vhost configuration can switch over to PHP 7.4 when needed.
Enable CGI-execution module for Apache

a2enmod cgid

Enable CGI-execution for the virtual host
Add these lines to the virtual host configuration. The additions below also adjust what is considered to be an index page and prevent downloading of files with some specific extensions (that part should ideally be done in the main server configuration).

  DirectoryIndex index.cgi index.php index.html index.htm
  <Directory /var/www/quizzer.webit.nu/html>
    AllowOverride All
    Options +ExecCGI
  </Directory>
  AddHandler cgi-script .cgi
  <FilesMatch "\.(?:inc|pl|py|rb)$">
    Order allow,deny
    Deny from all
  </FilesMatch> 
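Note that ‘Order allow,deny’ / ‘Deny from all’ is the old Apache 2.2 access-control syntax; it still works on Apache 2.4 through mod_access_compat (enabled by default on Debian/Ubuntu), but on a pure 2.4 configuration the equivalent block would be:

  <FilesMatch "\.(?:inc|pl|py|rb)$">
    Require all denied
  </FilesMatch>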

Check that CGI-script works
Use this simple CGI script to check that it works (test.cgi):

#!/usr/bin/perl
print "Content-type: text/html\n\n";
print "Hello, World.";

The script also has to be executable. Then restart Apache to reload the configuration:

chmod 700 test.cgi
service apache2 restart
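To verify from a shell instead of a browser, a plain request with curl (assuming test.cgi was placed in the vhost root) should return the headers and the greeting:

curl -i https://quizzer.webit.nu/test.cgi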

Updating the code for a new Perl version

(Screens from my actual code)
How to make Perl include files in the current directory

At some point (Perl 5.26), Perl got a security fix that removed the current directory (the script directory) from the module search path (@INC) when including other code files. This broke my script badly.

There are several ways around this problem, and I ended up solving it my own way: I wrote a two-line wrapper for ‘/usr/bin/perl’ and saved it as ‘/usr/local/bin/perl’ (which was the interpreter path in all my scripts):

#!/bin/sh
PERL_USE_UNSAFE_INC=1 exec /usr/bin/perl "$@"

This method required no modification of any of my source files to get them to execute correctly and find their included files.

‘defined’ not allowed on arrays anymore
For some reason, it is no longer possible to use ‘defined @array’ to check whether the variable has been set (it became a fatal error in Perl 5.22). So I had to replace every occurrence of ‘defined @’ with just ‘@’, which made my code much less readable:

Before: (screenshot of the code before the change)

After: (the same code with the ‘defined’ dropped)

According to Perldoc, use of ‘defined’ on aggregates (hashes and arrays) is no longer supported; it used to report whether memory for that aggregate had ever been allocated.
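The change in behavior can be demonstrated with one-liners (a sketch; the error text below is what Perl 5.22 and later prints, where this became fatal):

$ perl -e 'my @a = (1); print "set\n" if defined @a;'
Can't use 'defined(@array)' (Maybe you should just omit the defined()?) at -e line 1.
$ perl -e 'my @a = (1); print "set\n" if @a;'
set

Note that the two are not strictly equivalent: ‘defined @a’ used to report whether the array had ever been allocated, while ‘@a’ in boolean context simply tests whether it is non-empty.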

After these modifications everything worked fine, except for some small configuration mistakes in the quiz system itself (handling compressed question databases and pointing to some incorrect temporary locations).

Test it, use it if you wish

It took me some time to figure out how to create new users for storing personal test history. I had made this as simple as possible: you just type in anything unique (not already registered) that looks like an email address, and a password you want to use.
The system sets up a demo account for you if that user name is not in use.
“Personal” history for the non-logged in demo user looks like this:
(upper part)

(graphical overview)

(detailed report)

The “Find a hole” challenge is off

As this is old revived code, and no holes were reported during the time it was online (1999-2002), I had to make one myself 🙂
The hole is valid as long as I make no new databases for the system (if that happens, I will decide what to do at that point).

Get full access to all UNIX questions
All m$ questions are available in demo mode, so no fully activated account is needed for those. I recommend creating your own personal ‘demo’ account for the m$ questions so you can view your history.

So: simply use your external IP address as the user name and the password “FullAcccess2022” to give yourself a fully enabled user 🙂

JottaCloud secrets

I dug into the sqlite databases used by the JottaCloud client (and branded ones like Elgiganten) and found something that can be useful for other diggers…

This documentation is for the Windows version of the client. The path to the database files, and the path formats within the databases, will differ for the clients for other OSes.

11-Jan-2023: Updated example queries in the comments. Added ‘Find duplicates’.

Preparing

This method works for finding the location in the Windows version:
Open the client interface, go to settings, and under the “General” tab you will find a button that opens the log file location:

A window with the location ‘C:\Users\{myuser}\AppData\Roaming\Jotta\JottaWorld\log’ will be opened. Go to the parent directory, and there you will find the ‘db’ directory.

Keep this location open and QUIT the Jotta client (from the taskbar or any other effective method).

Copy the ‘db’ folder (or its parent ‘JottaWorld’) to a work or backup location. NEVER do anything without having a backup copy of the ‘db’ folder, or even the whole ‘JottaWorld’ (parent) folder, in case something goes wrong.

Examining the databases

From here, I will be examining each of the databases (.db files) and go through what I’ve found out. I will use the sqlite3 client supplied by Microsoft-invented Ubuntu (WSL); the alternative on Windows is to use a native sqlite3 client the same way, or just copy the ‘JottaWorld’ or ‘db’ directory to a computer with Linux (or any other real operating system) installed.

Basic sqlite3 usage

To open the database in sqlite3, simply use the sqlite3 command followed by the database name:

sqlite3 c.db

To show all tables in a database:

.tables

To show the table layout:

.schema {table name}

Select and update statements work basically as in other SQL clients.
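You can also run one-off queries directly from the shell, which is handy for scripting. For example (table and column names taken from the sections below):

sqlite3 dlsq.db "SELECT jwc_localpath, jwc_checksum FROM jwt_files LIMIT 5;"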

c.db (outside the ‘db’ folder)

An empty database with a single table ‘c’, defined as:

CREATE TABLE c (id INTEGER PRIMARY KEY ASC AUTOINCREMENT,type integer, time integer, size integer, attempts integer, checksum string, path string, known );

Its use is unknown to me (the table is empty in my db).
This database was last changed almost two years before I stopped the Jotta client.

dl.db

Contains only one table ‘requests’ defined as

CREATE TABLE requests (id integer primary key autoincrement, callerid integer, localpath, remotepath, created integer, modified integer, revision integer, size integer, checksum varchar(32), queue integer, state integer, attempts integer, flags integer );

Its use is unknown to me (the table is empty in my db).
This database was last changed a week before I stopped the Jotta client.

dlsq.db

Database for the Jotta Sync folder. This folder is by default synced in full on all computers set up against the same Jotta account. There is no selective sync or OneDrive-like on-demand sync in Jotta; the only option is to completely disable the sync folder on the “Sync” tab in the settings. The sync folder location can be changed there too.

Tables:

jwt_blockingevents

(empty)

jwt_files

Information about all files

jwt_folders

Information about all folders

jwt_queuedfiles

Files checksummed and queued for transfer

jwt_shares

Shared files and folders within the sync folder

jwt_folders
The table is defined as:

CREATE TABLE jwt_folders (jwc_id INTEGER PRIMARY KEY ASC AUTOINCREMENT, jwc_stateid, jwc_remotepath, jwc_remotehash, jwc_localpath, jwc_localhash, jwc_basepath, jwc_relativepath, jwc_folderhash , jwc_state, jwc_parent, jwc_newpath);
jwc_id

Folder id, used in the jwc_parent column and in jwc_files

jwc_stateid

empty on the data I have

jwc_remotepath

Path to the folder at Jotta, starting with ‘/{Jotta user name}/Jotta/Sync/’

jwc_remotehash

md5sum of the folder (?); a folder itself cannot be hashed

jwc_localpath

The full local path to the folder

jwc_localhash

md5sum of the folder (?); a folder itself cannot be hashed

jwc_basepath

empty on the data I have

jwc_relativepath

Path relative to the Sync folder location, empty on many of the entries

jwc_folderhash

empty on the data I have

jwc_state

State as cleartext ‘Updated’ if all files are synced

jwc_parent

id (jwc_id) of parent folder

jwc_newpath

empty on the data I have

jwt_files
The table is defined as:

CREATE TABLE jwt_files (jwc_id INTEGER PRIMARY KEY ASC AUTOINCREMENT, jwc_remotepath, jwc_remotesize INTEGER, jwc_remotehash, jwc_localpath, jwc_localsize INTEGER, jwc_localhash, jwc_relativepath, jwc_created INTEGER, jwc_modified INTEGER, jwc_updated INTEGER, jwc_status, jwc_checksum, jwc_state, jwc_uuid, jwc_revision , jwc_folderid, jwc_newpath);
jwc_id

File id

jwc_remotepath

Path to the file at Jotta, starting with ‘/{Jotta user name}/Jotta/Sync/’

jwc_remotesize

File size on the remote end (should match localsize)

jwc_remotehash

md5sum of something at the remote end

jwc_localpath

The full local path to the file

jwc_localsize

File size on the local side (should match remotesize)

jwc_localhash

md5sum of something at the local side

jwc_relativepath

Path relative to the remote location, empty on many of the entries

jwc_created

timestamp of file creation

jwc_modified

timestamp of file modification

jwc_updated

zero on all my files

jwc_status

empty on the data I have

jwc_checksum

file md5 checksum

jwc_state

either ‘UpdatedFileState’ or ‘MovingFileState’ (used on renamed files, see ‘jwc_newpath’)

jwc_uuid

don’t know, ‘{00000000-0000-0000-0000-000000000000}’ on most files

jwc_revision

0, 1 or 11 on all my files

jwc_folderid

id (jwc_id from jwt_folders) of containing folder

jwc_newpath

New local name of a file renamed because of an upload error

jwt_queuedfiles
The table is defined as:

CREATE TABLE jwt_queuedfiles (jwc_id INTEGER PRIMARY KEY ASC AUTOINCREMENT, jwc_remotepath, jwc_remotesize INTEGER, jwc_localpath, jwc_localsize INTEGER, jwc_relativepath, jwc_created INTEGER, jwc_modified INTEGER, jwc_status, jwc_checksum, jwc_revision INTEGER, jwc_queueid, jwc_type, jwc_hash , jwc_folderid);

It was empty in my current copy of the database, but it should be more or less like jwt_files (used only temporarily).

jwt_shares
The table is defined as:

CREATE TABLE jwt_shares (jwc_id INTEGER PRIMARY KEY ASC AUTOINCREMENT, jwc_shareid, jwc_localpath, jwc_remotepath, jwc_owner, jwc_members );

Mostly self-explanatory, except for the two fields I’m unable to explain πŸ™‚
jwc_shareid is in the same form as jwc_uuid given above; jwc_owner is probably some secret string about my user (at Jotta) that I’m not supposed to share. It’s a 24-character alphanumeric string.

jobs.db

Contains only one table ‘jobs’ defined as

CREATE TABLE jobs (id integer primary key autoincrement, status integer, uri, name, path, databasepath, files integer, bytes integer );

Its use is unknown to me (the table is empty in my db).
This database file was last changed almost a year before I stopped the client.

mm.db

Backup folders. This is the only database I have made manual changes to (I made the folder names listed in the GUI more obvious for some entries). Never change anything without having a backup, and never change anything while the client is running.

Tables:

backup_schedule

The backup schedule (Schedule tab in settings)

backup_schedule_copy

Backup copy of the backup schedule

excludes

Files and folders excluded from backup

excludes_copy

Internal backup copy of the excludes table

mountpoints

All backup folders set in the client

backup_schedule and backup_schedule_copy
The backup schedule in settings seems to be a very simplified one. Judging by the database layout, it looks like they prepared for allowing different backup time settings for each day (I don’t know if that works).
The table is defined as:

CREATE TABLE backup_schedule(id INTEGER PRIMARY KEY, mountpoint INTEGER, start_day TEXT, start_hour INTEGER, start_minute INTEGER, end_day TEXT, end_hour INTEGER, end_minute INTEGER);

All self-explanatory except “mountpoint”, which is set to “-1” when I create a schedule. If the schedule is set to any of the multi-day variants (“weekends”, “weekdays”, “everyday”), there will be multiple entries in the database, one for each day:

sqlite> select * from backup_schedule;
1|-1|Monday|2|0|Monday|7|0
2|-1|Sunday|2|0|Sunday|7|0
3|-1|Saturday|2|0|Saturday|7|0
4|-1|Wednesday|2|0|Wednesday|7|0
5|-1|Tuesday|2|0|Tuesday|7|0
6|-1|Friday|2|0|Friday|7|0
7|-1|Thursday|2|0|Thursday|7|0
sqlite> select * from backup_schedule;
1|-1|Sunday|2|0|Sunday|7|0
2|-1|Saturday|2|0|Saturday|7|0
sqlite>

My guess about the ‘mountpoint’ column (which is set to “-1” by the schedule settings in the client) is that it refers to the ‘mountpoints’ table, so theoretically it should be possible to create separate schedules for each of the mountpoints by entering them directly into the database…
The ‘backup_schedule_copy’ table contains the schedule as it was before the last change made through the client.
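If that guess is right, a manual per-mountpoint entry might look like the sketch below (untested; the mountpoint id 2 is hypothetical, and mm.db should be backed up first, with the client stopped):

sqlite3 mm.db "INSERT INTO backup_schedule (mountpoint, start_day, start_hour, start_minute, end_day, end_hour, end_minute) VALUES (2, 'Monday', 2, 0, 'Monday', 7, 0);"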

excludes and excludes_copy
All files and folders that are excluded from the backup. This also includes the system and hidden files and folders that are not backed up. In the client settings, it is possible to include hidden files and folders.
The table is defined as:

CREATE TABLE excludes(id INTEGER PRIMARY KEY, mountpoint INTEGER, pattern VARCHAR(1024));

Not much to explain here. ‘mountpoint’ is set to ‘-1’, and I can find no possible use for it to match an entry in the ‘mountpoints’ table. ‘pattern’ allows simple pattern matching (*) against the full local path of a file or folder to exclude from backup.
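As a sketch, adding an exclude manually could look like this (again untested, with the client stopped and a backup made; the pattern is hypothetical, in the same forward-slash form as the paths in the databases):

sqlite3 mm.db "INSERT INTO excludes (mountpoint, pattern) VALUES (-1, 'C:/Users/*/Downloads/*.iso');"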

mountpoints
This table contains all the backup folders defined in the client.
The table is defined as:

CREATE TABLE mountpoints(jwc_id INTEGER PRIMARY KEY ASC AUTOINCREMENT,jwc_name,jwc_path,jwc_device,jwc_description,jwc_status,jwc_location,jwc_type,jwc_ip,jwc_suspended );
jwc_name

Name displayed in the client

jwc_path

The path for the folder to backup

jwc_device

Computer name (for the Jotta side ?)

jwc_description

Computer name

jwc_status

Status, can be any of the following:
Scanning
ScheduleWaiting
AllGood
Uploading
QueuedForScan

jwc_location

‘Local’ or ‘Remote’

jwc_type

Zero on all my entries

jwc_ip

127.0.0.1 for local paths, empty for remote

jwc_suspended

“Suspended” for paused backups, blank otherwise

I find the content of jwc_status to be incorrect more often than correct: while writing this, the client is scanning one of my network drives, but the database says “Uploading”. Many entries are “Up to date” according to the client, but are listed as other states in the db.

reque_c and reque_u

Two more sqlite3 database files, this time without the .db extension.

reque_c contains a table with queued uploads (scanned files, queued for checksumming), with the same definition as in reque_u. As these files are only queued for checksumming, the “checksum” field in the blob is an empty string. The content of the extraData fields in the blob is written to sm.db at (i.e. before) this stage.

reque_u contains a table with queued uploads (checksummed, waiting for upload slot):

CREATE TABLE uploads (id INTEGER PRIMARY KEY, tag INTEGER, blob BLOB );

id: just the entry id, duplicated (as the last value) in the blob
tag: the oddly named field for the mountpoint id (in mm.db), repeated in the blob
blob: JSON with the file information:

{
        "checksum": "6cd9bca0e441280fb72ff5cf6f7991b3",
        "cre": 1657809534,
        "extraData": {
            "id": 9730953,
            "parent": 12740
        },
        "localpath": "C:/Users/peo/Downloads/Toro Reelmaster 216 - Operators Manual - MODEL NO. 03410TEβ€”70001 & UP.pdf",
        "mod": 1657809535,
        "remotepath": "/jfs/LAPTOP-3/Downloads/Toro Reelmaster 216 - Operators Manual - MODEL NO. 03410TEβ€”70001 & UP.pdf",
        "size": 1972185,
        "tag": 9,
        "timeout": 0
    },
    "id": 9907
}

Most content of the blob is self-explanatory if you have read this far.
checksum: the md5 checksum of the file
cre, mod: timestamps of creation and last modification
extraData:id is the new file id and extraData:parent is the id of the folder containing the file (the folders table in sm.db). This information was written to the database in the scanning phase (reque_c).
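If you want to look inside the blobs, they appear to be stored as JSON text, so a sketch like this dumps one entry for inspection (untested on other versions of the client):

sqlite3 reque_u "SELECT id, tag, CAST(blob AS TEXT) FROM uploads LIMIT 1;"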

sm.db

Contains information on all backed up files and folders.
Tables:

files

Information for all backed up files

folders

Information for all backed up folders

mountpoint_status

(empty)

folders
The table is defined as:

CREATE TABLE folders (id integer primary key autoincrement, path text UNIQUE, state integer, parent integer, mountpoint integer, checksum varchar(20));
path

Full local path to the folder

state

Contains a value of 1, 2, 5, 6 or 7 in my database; I have no idea what it represents

parent

Id of parent folder (in this table)

mountpoint

mountpoint id in mm.db

checksum

md5 checksum of something (a folder itself cannot be checksummed)

files
The table is defined as:

CREATE TABLE files (id integer primary key autoincrement, path text UNIQUE, parent integer, size integer, modified integer, created integer, checksum varchar(16), state integer, mountpoint integer);
path

The full path of the backed up file

parent

the id of the containing folder (in folders table)

size

file size

modified

timestamp of modification

created

timestamp of creation

checksum

md5 checksum of file

state

Contains a value of 6 or 7 in my database; I have no idea what it represents

mountpoint

mountpoint id in mm.db

So why all this trouble analyzing the databases?

I wanted an easy way of finding my files by their md5 checksums; that was one of the reasons. Another thing (not solved yet) is that I want to find a way of recreating the share link for a specific file or folder within a publicly shared folder on my Jotta account (without going through the web interface; I mean, it’s already shared inside an accessible folder).
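For the checksum search, the files table in sm.db makes it a one-liner; the second query below lists duplicated files, which is the ‘Find duplicates’ query mentioned in the changelog at the top (the checksum value is the example from the blob above):

sqlite3 sm.db "SELECT path FROM files WHERE checksum = '6cd9bca0e441280fb72ff5cf6f7991b3';"
sqlite3 sm.db "SELECT checksum, COUNT(*) FROM files GROUP BY checksum HAVING COUNT(*) > 1;"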

One odd thing I noticed is that there are md5 checksums for folders, and three different ones in the sync folder (the jwt_files and jwt_folders tables in dlsq.db), while for the individual files there is only the file’s real md5 checksum.

Anyway… that investigation will continue some other day…

Comment below if you find the way to calculate the share-id, or find it useful in any other way πŸ™‚

piStorm – Preparing the SD-card for Emu68

This guide is a continuation/rewrite of piStorm – getting started with Emu68, which was written as a starter’s guide for just getting Emu68 up and running on the piStorm.

Since I wrote that guide, Michal has added instructions similar to what I present here to the resources at GitHub.

Resources

Emu68 for piStorm Nightly build
Emu68 Docs section at GitHub

Getting the files you need

As described before, you should look for the latest file named something like “Emu68-pistorm-20211220-62363e.zip”. The content of this file will, in the last step, be copied to the root of the Fat32 partition of the SD-card.

Preparing the SD-card

Emu68 presents partitions with the 0x76 ID as hard drives to the Amiga side through the “brcm-sdhc.device”, so we need to create at least two partitions on the SD-card (which normally comes prepared with a single Fat32 partition).
I do this entirely using the command line program “diskpart” on my Windows computer.

Find the command prompt, either somewhere in the Windows menu or by using the search function and searching for “cmd”. Right-click the icon and select “Run as administrator”. You will get a warning that you are about to do something dangerous; accept that one 🙂

The dangerous part

Insert your SD-card, then run “diskpart” from the command prompt. List all recognized disks with the diskpart command “list disk”. If you find an obvious disk (in my case “disk 3”) that must be your SD-card, use the “select disk” command to make it the current one. List the partitions on the disk with “list part” to ensure you are working on the right disk.

If in any doubt, exit “diskpart” with the “exit” command, remove the SD-card, then run “diskpart” again and list the disks. The missing one is your SD-card.

Use the “clean” command to remove all partitions on the SD-card (seen as the last command in the image above, and as the first in the image below). Then create the Fat32 partition. Only a few MB are needed, but I usually allocate 200MB for the Emu68 and kickstart files.
As shown below, I then create a 500MB partition, a 2GB partition and one for the rest of the space on the card (in this case 26GB), and with those same commands I set the partition ID to 0x76 (which must be specified so the Amiga can find the emulated disks).

Exit “diskpart” with the command “exit”, and then exit the command line shell, also with “exit”.
Commands above (create partitions):

cre part pri size=200
cre part pri size=500 id=76
cre part pri size=2000 id=76
cre part pri id=76

Give the Fat32 partition a drive letter

sel part 1
assign

Format the Fat32 partition
This can be done in many ways, but as we are already inside ‘diskpart’, I present the easiest way first 🙂

format fs=FAT32 label=Emu68 quick

Another way is to accept the format request that Windows pops up directly after the “assign” command in ‘diskpart’, or to do the same thing manually, as described below:
Go to “This PC” in Explorer (the file explorer, not the ancient web browser), right-click the small partition on the SD-card and select “Format”.

Copy the files from the latest Emu68 nightly to the root of the SD-card. This can be done with WinRAR (rarlab.com) without extracting the files first. Just select the latest nightly archive, right-click and choose “Extract files…”, then type in the drive letter of the 200MB partition:

Copy your choice of Kickstart ROM (usually a kickstart for the A1200) file to the root of the Fat32 partition and update the config.txt accordingly.

Now the SD-card is ready to be booted with the piStorm. Boot from some floppy with the AmigaOS hard drive installation utilities, change the HDToolBox tooltype SCSI_DEVICE_NAME to brcm-sdhc.device, then start HDToolBox and set up the partitions on the disks with IDs other than 0 (zero), which represents the whole disk and should not be used within AmigaOS.
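For reference, the tooltype line in the HDToolBox icon should end up reading:

SCSI_DEVICE_NAME=brcm-sdhc.device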

This procedure is well described in the current documentation by Michal (and in a lot of other places), so head over there and read his guide.
If you decide to install AmigaOS 3.2, you do not need to use the PFS3aio filesystem. FFS works fine with large disks and partitions in this release.

piStorm – getting started with Emu68

In this guide, which will be my shortest ever, I explain how to get started with the Emu68 barebone JIT emulator for the piStorm.

Resources

Emu68 for piStorm Nightly build

The shortest instructions ever πŸ™‚

Download the latest nightly build of the early-alpha Emu68 for piStorm from the resource above. You should be looking for a file named “Emu68-pistorm-20211106-9c3186.zip” or similar (note that the files are sorted in ascending alphabetical order, so the latest ones are a bit down the list).

Extract the files to the root of a fat32-formatted SD-card.

Copy your Amiga kickstart file to the root of that card.

Edit the configuration file (config.txt) and set the kickstart file name (last line in the included config file):

...
# PiStorm variant - use initramfs to map selected rom
initramfs kick.rom

Insert the SD-card in your Pi 3A+ mounted on the piStorm. Power on the Amiga and enjoy the extremely short startup time 🙂

The setup is now ready to boot from floppy (although many games do not work yet, at least not when booting from floppy; Workbench floppies do work, as does the Install3.2 disk for installing AmigaOS onto a hard drive).

Installing AmigaOS on a hard drive (partition on SD-card)

For a hard drive setup, there are more steps involved, such as partitioning the SD-card into at least one boot partition and one or more Amiga partitions.

piStorm – Preparing the SD-card for Emu68

OpenWrt on Raspberry Pi 4 (and CM4)

Installation and configuration notes

Stuff used
Raspberry Pi 4 Compute Module (CM4, 4GB)
Waveshare Dual Gigabit Ethernet Base Board (CM4-DUAL-ETH-BOX-A)

Resources
OpenWrt Wiki
OpenWrt Firmware for Raspberry Pi 4

CM4-DUAL-ETH-BASE Wiki
(Very thin documentation on the CM4 baseboard used; nothing about the USB3 network port, but some info on the RTC, fan control, and the display and camera interfaces)

Internet of Things – a techie’s viewpoint
(I mainly used the beginning of chapter 36 for the first good enough solution I found on how to switch the interfaces, so that eth0 is used for WAN and eth1 for LAN)

Installation

Get the latest (stable) version of OpenWrt (I use “Factory (EXT4)”), write it to a MicroSD-card the usual way, insert it into the slot on the CM4 board and boot up.

Note: Before booting from the SD-card, you might want to resize the Linux partition and file system on it. Do this on another Linux-based system:
Insert the SD-card into a reader/card slot and check the end of the ‘dmesg’ output for which device was assigned to the card:

root@DS1517:~# dmesg |tail
[13376.702534] sd 10:0:0:1: [sdr] 61849600 512-byte logical blocks: (31.6 GB/29.4 GiB)
[13376.714483]  sdr: sdr1 sdr2

In this case (on my Synology NAS), the card reader’s slot was assigned ‘sdr’.

Resize the partition with ‘parted’:

root@DS1517:~# parted /dev/sdr
GNU Parted 3.2
Using /dev/sdr
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
p
Model: TS-RDF8 SD Transcend (scsi)
Disk /dev/sdr: 31.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      4194kB  71.3MB  67.1MB  primary               boot, lba
 2      75.5MB   173MB   104MB  primary

(parted) resizepart 2 -1
...
(parted) q

Resize the file system with ‘resize2fs /dev/sdr2’
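An offline resize2fs wants a clean file system, so run a forced check first (device names from the ‘dmesg’ output above):

e2fsck -f /dev/sdr2
resize2fs /dev/sdr2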

The default is to use eth0 for LAN, which I didn’t like, given the possibility that the other, USB3-based interface might be less stable (it uses the kernel driver for a not-quite-matching chip model).
To fix this I used the guide mentioned above, the beginning of chapter 36, with some modifications to fit my network.

(section 36.4 in IoT guide)
Later distributions of OpenWrt start up logged in as root on the console, which makes it easier to do the initial adjustments to the network settings. As the guide mentions, if your home network is on the 192.168.1.0/24 subnet (or a larger one, like /16), you can instead get to the shell over SSH (root, no password) to do the modifications.
Note: If you already have something (probably a router) at 192.168.1.1, you have to connect a computer directly to the CM4 router’s eth0 interface and make the configuration changes that way. Once eth0 is set to DHCP, you can connect it to your LAN (which will give it an IP address you have to find out). In this case your LAN is actually the WAN for the CM4 router.

Change the lan section of /etc/config/network to:

config interface 'lan'
    option ifname 'eth0'
    option proto 'dhcp'

While at it, if you do not plan to use the WiFi on the Pi, disable it in /boot/config.txt:

dtoverlay=disable-wifi

Reboot the Pi, and it will get an IP address by DHCP (handed out by your old router). Either find that IP in the old router or just run the “ifconfig” command on the console.

Installing the kernel module for the USB3 network port

(section 36.5 in IoT guide)
To get the second network port working, you need to install the correct kernel module for the chipset it uses. In the case of the CM4 base board, the chip is an rtl8153. Unfortunately there is no exact match for that chip (yet/ever?), but rtl8152 works fine. Use ‘opkg’ to install the module:

opkg update
opkg install usbutils
opkg install kmod-usb-net-rtl8152

For further configuration, I also add a more user-friendly text editor than ‘vi’:

opkg install nano

Verify by ‘ifconfig eth1’ that the second network adapter shows up.

Switching the eth0 / eth1 interfaces to have eth0 for WAN

Now that both interfaces are visible, we can switch their usage as described in the IoT guide. My network (the LAN side, which becomes the WAN for the CM4 router) uses a /16 netmask, so the network on the inside of the CM4 router cannot be in that same IP range.
For the inside, I chose 172.16.3.0/24 (from the private IP series) and gave my CM4 router the IP address 172.16.3.254.

Change the old ‘lan’ section to ‘wan’ and add a new “lan” section in /etc/config/network:

config interface 'wan'
	option ifname 'eth0'
	option proto 'dhcp'

config interface 'lan'
	option proto 'static'
	option ifname 'eth1'
	option ipaddr '172.16.3.254'
	option netmask '255.255.255.0'
	option type 'bridge'

Configure DHCP on the LAN interface
Add a “dhcp” section for eth1 in /etc/config/dhcp:

config dhcp 'eth1'
	option start '100'
	option leasetime '12h'
	option limit '150'
	option interface 'eth1'

Reboot the CM4 router, connect your uplink cable to eth0 and a computer to eth1. When the CM4 router has started, and if everything works well, the computer should get an IP address on the 172.16.3.0 network (in the range .100 to .249).

LuCI confusion by manual configuration

Access the web interface at http://172.16.3.254 and set a password for it.
The first time you access the network configuration of your manually configured CM4 router, LuCI will ask to update the configuration to the new format (using a ‘br-lan’ device instead of “option type ‘bridge’” and the manually entered ‘ifname’ in the lan section). Allow these changes, and the GUI is ready for use.

Configuration

The first step is to go to System/Software in the menu and click the “Update lists” button to refresh/create the list of available packages for OpenWrt. Then use the many OpenWrt guides online for additional configuration ideas.

If, during setup, your CM4 router sits behind another router on the local network, change the firewall setting for WAN to allow inbound access (unless you are happy with accessing it from a computer on that router’s LAN interface).
You find that setting under “Network/Firewall”:

After this change you can access the web interface and SSH over the WAN-side IP. Do not forget to change it back if this router is put on a public network!

That’s it for the basics of getting started with OpenWrt on a Pi4 with dual ethernet interfaces (either with the CM4 baseboard used here or a separate USB3 dongle). I have probably missed some steps, as this guide was written some time after I completed the setup.

Statistics

Add and configure (accept the default settings) the package named ‘luci-app-statistics’ to get graphs for CPU usage and network traffic.
Add the module ‘collectd-mod-thermal’ to get a graph of the CPU temperature. It needs to be enabled under “Statistics/Setup/General plugins”.
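In short (package names as above):

opkg update
opkg install luci-app-statistics collectd-mod-thermal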

Other packages that might be useful

diffutils – if you want to be able to compare content of (configuration) files
bash – a better shell
bzip2, tar, unzip – archive utilities

Application servers

Stuff that is more than just a package; you might wish to run these on the CM4 router. I will write separate guides for them when I get them working.
AdGuard Home – to get rid of many annoying adverts and dangerous links
Unifi controller – if you have Unifi APs
Home Assistant (HA) – to control stuff

Adding more disk space to OpenWrt

It’s a good idea to keep the application servers separate from the OpenWrt MicroSD card, or from the limited onboard eMMC on some Pi and CM4 models. As the applications can be write-intensive, it’s recommended that they do their main activity on an external SSD.

For the preparation of the SSD (connected to one of the USB3 ports), I used directions 1-4 in https://openwrt.org/docs/guide-user/storage/usb-drives. The only thing I did differently was to drop the existing partition on my new disk (all USB storage devices come preconfigured with a Windows partition) and create a Linux partition, then format it as ext4 (also described in that guide).

For managing auto mount of the device, I followed another of the official guides: https://openwrt.org/docs/guide-user/storage/fstab

After the fstab has been created, the mount points can be administered through LuCI (System/Mount Points). Here it is safe to delete root (“/”) and “/boot” from the “Mount Points” list, then enable the mount for the USB drive.

Reboot the CM4 router to see if the external disk comes up and the partition gets mounted. If the boot seems stuck, disconnect the USB drive and see if it continues (this can be done without a monitor attached). If that is the case, the drive you are using is not UAS-compatible, and UAS has to be disabled for it in /boot/cmdline.txt.
Add the following to the beginning of the line in that file (the file should always contain only one line):

usb-storage.quirks=152d:0579:u

You get the vendor and product ID values (the ‘152d:0579’ part) from the lsusb command:

root@OpenWrt:~# lsusb
Bus 001 Device 003: ID 413c:2005 DELL DELL USB Keyboard
Bus 002 Device 003: ID 0bda:8153 Realtek USB 10/100/1000 LAN
Bus 001 Device 002: ID 2109:3431  USB2.0 Hub
Bus 001 Device 001: ID 1d6b:0002 Linux 5.15.137 xhci-hcd xHCI Host Controller
Bus 002 Device 002: ID 152d:0579 Intenso Portable SSD
Bus 002 Device 001: ID 1d6b:0003 Linux 5.15.137 xhci-hcd xHCI Host Controller
root@OpenWrt:~#

This is described in greater detail on https://www.pragmaticlinux.com/2021/03/fix-for-getting-your-ssd-working-via-usb-3-on-your-raspberry-pi/

Ivacy VPN settings

Get the OpenVPN-Configs.zip file from here:
https://support.ivacy.com/setup_guide/how-to-setup-openvpn-on-pf-sense/
or (any of the non-Mac and non-Windows files) here:
https://support.ivacy.com/vpnusecases/openvpn-files-windows-routers-ios-linux-and-mac/

Follow the guide
https://support.ivacy.com/setup_guide/how-to-configure-and-install-openvpn-on-your-openwrt-router/

Ivacy-VPN related content in /etc/config/openvpn (as created by LuCI)
For easier configuration, skip the steps in the guide explaining how to configure the VPN connection using LuCI. Just add the connection, hit “Save & Apply” on the basic settings page, then edit the /etc/config/openvpn file directly:

config openvpn 'Ivacy'
        option dev 'tun'
        option nobind '1'
        option comp_lzo 'yes'
        option verb '1'
        option persist_tun '1'
        option client '1'
        option auth_user_pass '/etc/openvpn/userpass.txt'
        option resolv_retry 'infinite'
        option auth 'SHA1'
        option cipher 'AES-256-CBC'
        option mute_replay_warnings '1'
        option tls_client '1'
        option ca '/etc/openvpn/ca.crt'
        option tls_auth '/etc/openvpn/tls-auth.key'
        option auth_nocache '1'
        option remote_cert_tls 'server'
        option key_direction '1'
        option proto 'udp'
        option port '53'
        list remote 'usny2-ovpn-udp.dns2use.com'
        option enabled '1'

Server list
https://support.ivacy.com/servers-list/

Being away from using my OpenWrt CM4s for a while, lost root password

As I initially did this as an experiment to see if the CM4 was good enough to replace my Linksys 3200 router, and to see what else it could be used for, the Linksys was left in place alongside the CM4s. When the fan failed in the Waveshare case (which didn’t take long, about a month or so), I disconnected the CM4 and have not used it since.
Coming back to the CM4s now, about two years later, I was not able to log in over SSH or LuCI with any of the passwords I use for testing stuff.
This actually happened (the first time) on the second CM4 device, the one with eMMC and no MicroSD card that could easily be mounted in another Linux machine (including the NAS).
Suggestions for resetting the root password range from pressing the ‘reset’ button at the right moment to get into failsafe mode (I don’t know if that is possible with the reset button on the Seeed carrier board) to copying content from /etc/shadow from another computer (which didn’t work).
Usually when this happens, it is easy enough to connect the drive to another Linux-running computer, mount it and chroot into it, but not in this case with OpenWrt, since there is no ‘passwd’ command in the BusyBox binary for Linux on the Raspberry Pi 4. I was also not able to start BusyBox from the mounted eMMC storage.

The solution I came up with, which finally gave me a known root password, was to use another Raspberry Pi 4 and change the password with passwd as follows (a rough command sketch follows the list):
1: install and start usbboot on that other Raspberry Pi4
2: change boot jumper on carrier board to storage mode (eMMC will be connected as a USB drive)
3: plug in the carrier board using the USB-C port
4: ‘usbboot’ should detect the device and load the driver
5: use ‘lsblk’ to see the device/partition name and then mount it
6: copy the PIs /etc/passwd and /etc/shadow to a safe place for restoring when done
7: make backup copies of /etc/passwd and /etc/shadow on the CM4 (even if you will never have any use of them)
8: copy (overwrite) /etc/passwd and /etc/shadow on the Pi4 with the ones on the CM4s eMMC drive
9: use passwd to set the new root password
10: copy back /etc/passwd and /etc/shadow to CM4 eMMC
11: restore /etc/passwd and /etc/shadow on the Pi4
12: unmount eMMC partition and eject the device (‘eject’ command)
13: put the ‘boot’ jumper back in its original position (on the Seeed Studio carrier board it should just be removed; putting it on the two other pins will SHORT 5V and GND, which cannot do anything good)
14: CM4 should now have a working root password
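A rough sketch of steps 5 to 12 as shell commands on the helper Pi 4; the eMMC partition name /dev/sda2 is an assumption, check the ‘lsblk’ output:

lsblk                                                     # find the eMMC root partition (assumed sda2 here)
mount /dev/sda2 /mnt
mkdir -p /root/pi-backup /root/cm4-backup
cp -p /etc/passwd /etc/shadow /root/pi-backup/            # step 6: save the Pi's own files
cp -p /mnt/etc/passwd /mnt/etc/shadow /root/cm4-backup/   # step 7: backups of the CM4 files
cp -p /mnt/etc/passwd /mnt/etc/shadow /etc/               # step 8: put the CM4 files on the Pi
passwd root                                               # step 9: set the new root password
cp -p /etc/passwd /etc/shadow /mnt/etc/                   # step 10: copy back to the CM4 eMMC
cp -p /root/pi-backup/passwd /root/pi-backup/shadow /etc/ # step 11: restore the Pi's own files
umount /mnt && eject /dev/sda                             # step 12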

Other carrier boards with dual ethernet ports

Seeed Studio Dual Gigabit Ethernet NICs Carrier Board
Product page
Getting Started with Dual Gigabit Ethernet Carrier Board for Raspberry Pi Compute Module 4 (hardware info)
The latest version of the pre-installed image (2022-07-18, as checked today, 8 Jan 2023) is linked from their Getting Started with OpenWrt guide.

I recently did the same installation on the carrier board from Seeed Studio, with an 8GB/8GB eMMC CM4 on it.
At the time I wrote this guide, the official CM4/Pi4 OpenWrt image did not contain the needed driver for the network adapter on the USB bus, which led me to use the bloated image from Seeed Studio.
For the new installation attempt, I found out which driver was used and was prepared to install it manually.
The driver needed for this board is ‘kmod-usb-net-lan78xx’, but as it is now included in the official image, no additional steps (except configuring eth1) are needed.
Resizing the OpenWrt root partition is done the same way as described above, apart from the extra steps needed when using a CM4 with eMMC (described in the “Getting Started” guide above; follow the Mac/Linux instructions for installing “usbboot”).

Seeed Studio CM4 Router Board
Another board from Seeed Studio. This one uses a real NIC controller chip (RTL8111E) for the second port, providing better stability and speed:
CM4 Router Board product page

Xpenology – Synology DSM on non-Synology hardware

This bunch of resources needs to be reorganized some day… I made this post just to be able to close off a rotting web browser window…

General

https://xpenology.org/
https://xpenology.org/installation/
https://xpenology.club/category/tutorials/
https://xpenology.com/forum/topic/9394-installation-faq/?tab=comments#comment-81101
https://xpenology.com/forum/topic/9392-general-faq/?tab=comments#comment-82390

Specific hardware

https://xpenology.com/forum/topic/20314-buffalo-terastation-ts5800d/
https://en.wikipedia.org/wiki/Haswell_(microarchitecture)

Misc

https://xpenology.com/forum/topic/24864-transcoding-without-a-valid-serial-number/
https://xpenology.com/forum/topic/38939-serial-number-for-ds918/
https://xpenogen.github.io/serial_generator/index.html

https://xpenology.com/forum/topic/29872-tutorial-mount-boot-stick-partitions-in-windows-edit-grubcfg-add-extralzma/
https://xpenology.com/forum/topic/12422-xpenology-tool-for-windows-x64/page/5/

Unsorted

https://xpenology.com/forum/topic/12952-dsm-62-loader/page/75/
https://xpenology.com/forum/topic/28183-running-623-on-esxi-synoboot-is-broken-fix-available/
https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/
https://xpenology.com/forum/topic/7973-tutorial-installmigrate-dsm-52-to-61x-juns-loader/
https://xpenology.com/forum/topic/7294-links-to-dsm-and-critical-updates/

Synology DSM archive

https://archive.synology.com/download/Os/DSM/6.2.3-25426-3

Errors

https://xpenology.com/forum/topic/14114-usb-stick-no-vidpid/
https://xpenology.com/forum/topic/9853-dsm_ds3617xs-installation-error-the-file-is-probably-corrupt-13/
https://xpenology.com/forum/topic/13253-error-21-problem/

Synology DSM 7 and broken FTP support in curl

I recently updated my DS1517 to DSM 7 and noticed that FTP support has been left out of the curl/libcurl they include. This is how I compiled the latest version of curl with support for all the omitted protocols. It still needs more fixing, since I was not able to compile it with SSL support (so no https, which is included in the curl in DSM 7).

My guide is for the Synology DS1517 (ARM). You have to download the correct files for your NAS and set the correct options (paths and names) for the compile tools if you have another model.

The problem

For some unknown reason, Synology decided to drop support for all protocols except http and https in the curl binary included with DSM 7:

root@DS1517:~# curl --version
curl 7.75.0 (arm-unknown-linux-gnueabi) libcurl/7.75.0 OpenSSL/1.1.1k zlib/1.2.11 c-ares/1.14.0 nghttp2/1.41.0
Release-Date: 2021-02-03
Protocols: http https
Features: alt-svc AsynchDNS Debug HTTP2 HTTPS-proxy IPv6 Largefile libz NTLM NTLM_WB SSL TrackMemory UnixSockets
root@DS1517:~#

The outcome of following this guide:

curl.ftp --version
curl 7.79.1 (arm-unknown-linux-gnueabihf) libcurl/7.79.1
Release-Date: 2021-09-22
Protocols: dict file ftp gopher http imap mqtt pop3 rtsp smtp telnet tftp
Features: alt-svc AsynchDNS IPv6 Largefile UnixSockets

As seen and mentioned above, I was not able to enable SSL in my compiled version, so this will not replace the curl included in DSM 7, but it can be installed in /bin under another name, as it has libcurl statically linked into the binary.

What you need to compile for the Synology

The first thing you need is a Linux installation to use as a development system, containing the Synology toolkit for cross-compiling.
A fairly standard installation will do; at least mine did (but it also includes PHP, MySQL, Apache and other useful stuff). This is preferably done on a virtual machine, but you can of course use a physical computer.

You also need the Synology DSM toolchain for the CPU in the NAS you want to compile for. I found the links in the Synology Developer Guide (beta).
There is also supposed to be an online version of the guide, but at least for me, none of the links within it worked.

Get the toolchain
To find out which toolchain you need, run the command ‘uname -a’:

root@DS1517:~# uname -a
Linux DS1517 3.10.108 #41890 SMP Thu Jul 15 03:42:22 CST 2021 armv7l GNU/Linux synology_alpine_ds1517

As seen above, the DS1517 reports “synology_alpine_ds1517”, so you should look for the “alpine” versions of the downloads for this NAS.
Get the correct toolchain for your NAS from the Synology toolkit downloads. For the DS1517, I downloaded the file “alpine-gcc472_glibc215_alpine-GPL.txz”.
Download and unpack on the development system:

wget "https://global.download.synology.com/download/ToolChain/toolchain/7.0-41890/Annapurna%20Alpine%20Linux%203.10.108/alpine-gcc472_glibc215_alpine-GPL.txz"
tar xJf alpine-gcc472_glibc215_alpine-GPL.txz -C /usr/local/

The above will download and unpack the toolchain to the /usr/local/arm-linux-gnueabihf folder. It contains Linux executables for the GNU compilers (gcc, g++ etc).

arm-linux-gnueabihf-gcc: No such file or directory
Now, whenever you try to execute any of the commands extracted to the bin directory, you will probably get a “No such file or directory” error (even with the correct path and filename, and with the file being executable).
If you examine the executables using the ‘file’ command, you will discover that they are 32-bit binaries:

root@ubu-01:~# file /usr/local/arm-linux-gnueabihf/bin/arm-linux-gnueabihf-gcc-4.7.2
/usr/local/arm-linux-gnueabihf/bin/arm-linux-gnueabihf-gcc-4.7.2: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.15, stripped

I found the solution to the problem here:
arm-linux-gnueabihf-gcc: No such file or directory
In short:

dpkg --add-architecture i386
apt-get update
apt-get install git build-essential fakeroot
apt-get install gcc-multilib
apt-get install zlib1g:i386

Now that we have the cross-compiling toolkit working, let’s continue with curl.

Cross-compile curl for Synology NAS

The current version at the time I wrote this guide was 7.79.1, so I downloaded the source and uncompressed it:

wget https://curl.se/download/curl-7.79.1.tar.gz
tar xfz curl-7.79.1.tar.gz
cd curl-7.79.1

Set some variables and GCC options

export TC="arm-linux-gnueabihf"
export PATH=$PATH:/usr/local/${TC}/bin
export CPPFLAGS="-I/usr/local/${TC}/${TC}/include"
export AR=${TC}-ar
export AS=${TC}-as
export LD=${TC}-ld
export RANLIB=${TC}-ranlib
export CC=${TC}-gcc
export NM=${TC}-nm

Build and install into installdir

./configure --disable-shared --enable-static --without-ssl --host=${TC} --prefix=/usr/local/${TC}/${TC}
make
make install

The above will build a statically linked curl binary for the Synology and put it in the ‘bin’ folder under the path specified with --prefix.

The final step is to copy the ‘curl’ binary over to the Synology (not to /bin yet) and test it; use “--version” to check that the binary supports FTP and the other protocols omitted by Synology:

./curl --version
curl 7.79.1 (arm-unknown-linux-gnueabihf) libcurl/7.79.1
Release-Date: 2021-09-22
Protocols: dict file ftp gopher http imap mqtt pop3 rtsp smtp telnet tftp
Features: alt-svc AsynchDNS IPv6 Largefile UnixSockets

If everything seems ok, copy the file to /bin and give it another name:

cp -p curl /bin/curl.ftp

If it complains about different versions of curl and libcurl, you failed somewhere in linking the correct libcurl statically.
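A quick functional test against an FTP server (host and credentials here are hypothetical):

curl.ftp -u user:password --list-only "ftp://192.168.1.10/backup/"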

Most useful sources for this article:

https://global.download.synology.com/download/Document/Software/DeveloperGuide/Firmware/DSM/7.0/enu/DSM_Developer_Guide_7_0_Beta.pdf
https://thalib.github.io/2017/02/17/32bit-no-such-a-file-or-directory/
https://curl.se/docs/install.html

Apollo Accelerators – Vampire

Apollo Core (68080)
Apollo Forum
Apollo Accelerators
Apollo Accelerators Wiki: Latest core (500, 600, 1200) | Installing Kickstarts
Vampire 500 V2: Part 1 | Part 2 (Epsilon’s Amiga Blog)
Checkmate A1500 Plus with Vampire 500V2 (Epsilon’s Amiga Blog)

The Complete Amiga 500 Vampire V500 V2+ Installation Guide (Amitopia)

My Vampire Card has arrived! (Lyonsden Blog)
Installing the Vampire V500 V2+ in my Amiga 500 (Lyonsden Blog)

AmiKit XE for Vampire V2 (AmiKit XE changelog)

majsta.com (Vampire PCB maker)
GOLD 3 Alpha
Quartus Prime (for flashing Vampire using USB Blaster)

Videos

Amiga Vampire CoffinOs – Quick setup and fun (Cotter’s Stuff)


Apollo Vampire – Emulation or Amiga AAA Salvation (Stephen Jones)

Episode 79 68080 Vampire install Amiga 2000 (Chris Edwards)

Amiga 500 Plus & Vampire 500 V2 + Follow Up (Dave’s Game Room)

8/16/2020 Demo of new Apollo OS with Manuel Jesus of Apollo Team, Tiny Bobble & EPIC Unboxing (Amiga Bill)

Windows for UNIX users

Command examples written down while manipulating files on an old drive with leftover crap from Windows: the UNIX command first, followed by its Windows equivalent.

chown, rm: Delete old Windows / Program Files directories from a second drive

chown -R root:root <Directory>

takeown /F "F:\ProgramData" /A /R /D Y
icacls "F:\ProgramData" /T /grant administrators:F

rm -rf <Directory>
(after chown above)

rd /s /q "F:\ProgramData"

ln
https://www.howtogeek.com/howto/16226/complete-guide-to-symbolic-links-symlinks-on-windows-or-linux/
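For reference, the cmd built-in for creating symbolic links (run as administrator) is ‘mklink’; add /D for directory links:

mklink <link> <target>
mklink /D <link> <target>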

cp -rp
xcopy <source>\*.* /s/e/f <dest>

Inner secrets of Synology Hybrid RAID (SHR) – Part 2b – My Synology case

At about 30% into the reshaping phase (after the first disk swap), my NAS went unresponsive (both shell and GUI disconnected), and I had to wait all day until I came home, did a hard reset on it, and hoped everything would go well…

In the meantime, I logged a case with Synology support. They were not of any direct help, but the hard reset did take the NAS back to continuing the reshaping process.

My case with Synology support

==
2020-12-01 13:51:37
==
Replaced one of the smallest drives in my NAS yesterday (SHR) as a first step for later expansion (I will replace all drives with larger ones before expanding – if possible to delay any automatic expansion until then).

About 80% finished with rebuilding yesterday, but for some reason it started over after the first round.

Today about 30% finished when I lost the connection to the NAS (over ssh and the web interface). It does not auto-reboot and does not respond to ping.

To lessen the risk of data loss, what should my first step be ? Can I just pull the plug and hard-reboot the NAS with the current disks mounted (14TB, 3TB, 3TB, 8TB, 8TB in a SHR config), or is it better to replace or remove the disk that I recently replaced (in slot 1: 14TB in place of the previous still untouched 3TB) ?

What are the steps to getting the volume back online if it does not mount automatically ?

As the NAS is down, I am not able to upload any logs, but attached is the rebuild status before the crash.

==
2020-12-01 15:28:58
Synology response (besides the auto response “send us logs”)
Not useful at all, exactly what I had already done; “Mark” who replied did not read anything…
==
Hello,

Thank you for contacting Synology.

If you wish to replace a drive in your unit, please perform these steps one by one allowing for the repair to complete before replacing any further drives.
1. Pull out the drive in question.
2. Insert a replacement drive.
3. Proceed to the Storage Manager > Storage Pool > select the volume in question and click “Manage/Action”
4. Run through the wizard to repair the volume in question with the replacement drive.
5. Once complete, proceed to the Storage Manager > Volume and Configure/Edit the volume to configure the volume to have additional size.
Please see the link below for more help.
https://www.synology.com/en-uk/knowledgebase/DSM/help/DSM/StorageManager/storage_pool_expand_replace_disk

Please bare in mind that you benefit from the additional space from the drives you will need to replace at least 2 drives for larger ones in RAID 5/SHR or 3 drives in RAID6/SHR2.
You can see the type of RAID used via – DSM > Storage Manager > Storage Pool.

If you have any further questions please do not hesitate to get in touch.

Best Regards,
Mark

==
2020-12-01 16:02:14
My reply
==
Ok, so I restart the problem description then:

I did (yesterday):
0. Power down Synology
1. Pull out the drive in question.
2. Insert a replacement drive.
3. Proceed to the Storage Manager > Storage Pool > select the volume in question and click “Manage/Action”
4. Run through the wizard to repair the volume in question with the replacement drive.

THEN, today:
4b. Today about 30% finished when I lost the connection to the NAS (over ssh and the web interface). It does not auto-reboot and does not respond to ping.

SO what now ?
As the NAS is unresponsive I will never reach step 5:

To lessen the risk of data loss, what should my first step be ? Can I just pull the plug and hard-reboot the NAS with the current disks mounted (14TB, 3TB, 3TB, 8TB, 8TB in a SHR config), or is it better to replace or remove the disk that I recently replaced (in slot 1: 14TB in place of the previous still untouched 3TB) ?

What are the steps to getting the volume back online if it does not mount automatically ?

Also, is there an option to DELAY the expansion until all drives have been replaces, as you replied changeing the first drive will not expand the volume, but I’m not there yet since I’m stuck in a crash (unresponsive system)

==
2020-12-02 23:25:46
My reply on Synologys’ suggestion to collect logs using the support centre
==
How do I launch “Support Center” on the device when it is unresponsive (which was my initial question – what to do when it hangs in the middle of repairing/reshaping) ?

I forced it off and restarted and hoped for the best – reshaping continued and the second disk is now in reshaping mode.

My other question has not yet been answered:

Is it possible to delay the time consuming step of reshaping until all disks have been replaced ?

Initial configuration: 3TB 3TB 3TB 8TB 8TB

After replacement of the first disk: 14TB 3TB 3TB 8TB 8TB, after reshaping the first disk got a partition to match the 8TB disks.

After replacement of the second disk: 14TB 14TB 3TB 8TB 8TB, while reshaping again, now disk 1 and 2 looks similar with one partition matching the largest of the remaining 3TB disk, one matching the largest on the 8TB disks and the remainder (roughly about 6TB) the same on both 14TB disks.

When replacing the third 3TB disk, I assume the following would happen:
(14TB 14TB 14TB 8TB 8TB)

On the first and second disk, the (about) 3TB partition will be replaced with a partition to match the 8TB disks. Then the remainder (3 disks with 6TB unallocated space) will be used for another raid5 (after yet another reshape)

So my question again; is it possible to delay reshaping until I have had all the disks replaced. I understand that the “rebuild” is needed in between every replacement, but “reshape” should be needed only once.

==
2020-12-03 12:19:07
Synology response
==
Hello,

Thank you for the reply.

I’m afraid you cannot delay or prevent this process, once it starts it needs to run until fruition.

I would suggest to leave this running for now, if the volume does crash fully in the mean time I can take a look at what we can do to recover the volume, but there is not much I can do currently I’m afraid.

If you have any further question please do not hesitate to get in touch.

Best Regards,
Mark
==

The crash

https://unix.stackexchange.com/questions/299981/recover-from-raid-5-to-raid-6-reshape-and-crash-mdadm-reports-0k-sec-rebuild
https://www.google.com/search?q=restart+synology+while+rebuilding
https://community.synology.com/enu/forum/17/post/20414

General SHR and mdraid links

https://www.youtube.com/results?search_query=synology+shr
https://bobcares.com/blog/raid-resync/
https://www.google.com/search?q=mdraid+reshape