Oracle cloud – I lost my public IP

On 14 April 2022, many Oracle Cloud users received an email stating that their VM public IPs had been lost.

Follow the easy step-by-step solution after the quoted email below to get (new) public IPs for your virtual machines.

Oracle Cloud Infrastructure Virtual Cloud Network – Issue Identified impacting Public IPs
Oracle Cloud Infrastructure Customer,

We have identified an issue affecting a subset of customers who have become unable to access their Oracle Cloud Infrastructure resources.

Customer Impact: Some customers with Free Tier accounts, using Ephemeral or Reserved Public IPs will be unable to access their resources due to the unintentional reclamation of the IPs associated with their Virtual Machines.

While we have taken steps to ensure no further impact occurs, any affected Public IPs will need to be re-established by reassigning a new Public IP through the Oracle Cloud Infrastructure Console, REST API, SDK CLI or other tools, as described in the following documentation:

If a preferred public IP is configured, the public IP assignment may still be reassigned subject to its availability.

Assign a new IPv4 address to your virtual machines:
1. Log in to Oracle Cloud (you have the URL somewhere in an email)
2. Find your machines (the listing), menu: compute / instances
2b. You might have to select the compartment where your VMs are located, even if you only have the ‘root’ compartment.
3. In the machine list, click the machine name.
4. Scroll down to the “Resources” section (at the left edge), click “Attached VNICs”.
5. In the VNIC list, click the name (Primary VNIC).
6. Scroll down to “Resources”, and click “IPv4 Addresses”.
7. At the right side of the window, click the three dots (which are hidden beneath the “Support” icon), then click “Edit” from the menu that pops up.
8. Click the “Ephemeral public IP” option, fill in an optional name, then click “Update”

Now, the remaining steps are updating DNS records pointing to the servers (if you have any), and updating connections (SSH) to reflect the new IP.

I revived “Quizzer”

Quizzer was written by me mostly between 1999 and 2000. I wrote the system entirely in Perl (a CGI script on a Solaris host) because there were no good enough applications out there. As this was a private project, I made no attempt to sell it (even though I had prepared for that, see the extensive documentation).
You can find Quizzer up and running on
Documentation updated to some point in time:

Most of the question databases (plain text following some rules) were rewritten from existing resources, but the questions shown in the video are ones I wrote myself while reading the Solaris 8 System Admin manuals.

Preparing the new server for CGI execution

Besides my standard setup for a Linux server for Apache/PHP/MySQL, I also switched over to using fcgid and php-fpm to be able to use PHP 8.1 as default and use a per-directory or per-vhost configuration to switch over to PHP 7.4 when needed.
Enable CGI-execution module for Apache

a2enmod cgid

Enable CGI-execution for the virtual host
Add these lines to the virtual host configuration. The additions below also adjust what is considered an index page and add configuration to prevent downloading of files with some specific extensions (this should be done in the server’s main configuration).

  DirectoryIndex index.cgi index.php index.html index.htm
  <Directory /var/www/>
    AllowOverride All
    Options +ExecCGI
    AddHandler cgi-script .cgi
    <FilesMatch "\.(?:inc|pl|py|rb)$">
      Order allow,deny
      Deny from all
    </FilesMatch>
  </Directory>

Check that CGI-script works
Use this simple Perl CGI script to check that it works (test.cgi):

#!/usr/bin/perl
print "Content-type: text/html\n\n";
print "Hello, World.";

Also, the script has to be executable by the user Apache runs as; then restart Apache to reload the configuration:

chmod 755 test.cgi
service apache2 restart

Updating the code for a new Perl version

(Screens from my actual code)
How to make Perl include files in the current directory

At some point, Perl got a security fix that removed the current directory (the script directory) from the module search path, so it is no longer considered when including other code files. This broke my scripts badly.

There are several ways around this problem, and I ended up solving it my own way: I wrote a two-line wrapper for ‘/usr/bin/perl’ and saved it as ‘/usr/local/bin/perl’ (which was the interpreter path in all my scripts):

#!/bin/sh
PERL_USE_UNSAFE_INC=1 exec /usr/bin/perl "$@"

This method required no modification of any of my source files to get them to execute correctly and find their included files.

‘defined’ not allowed on arrays anymore
In recent Perl versions it is no longer possible to use ‘defined @array’ to check whether the variable has been set. So I had to replace every occurrence of ‘defined @’ with just ‘@’, which made my code much less readable:



According to Perldoc, use of defined() on aggregates (hashes and arrays) was deprecated and later made a fatal error; the plain boolean test is what you are supposed to use instead.

After these modifications everything worked fine, except for some small configuration mistakes in the quiz system itself (handling compressed question databases and pointing to some incorrect temporary locations).

Test it, use it if you wish

It took me some time to find out how to create new users for storing personal test history. I had made this as simple as possible: you just type in anything unique (not already registered) that looks like an email address, plus a password you want to use.
The system sets up a demo account for you if that user name is not in use.
“Personal” history for the non-logged in demo user looks like this:
(upper part)

(graphical overview)

(detailed report)

The “Find a hole” challenge is off

As this is old revived code, and no holes were reported during the time it was online (1999-2002), I had to make one myself 🙂
This is valid as long as I make no new databases for the system (then if that happens, I decide what to do at that point).

Get full access to all UNIX questions
All m$ questions are available in demo mode, so no fully activated account is needed for these. I recommend creating your own personal ‘demo’ account for the m$ questions so you can view your history.

So: simply use your external IP-address as the user name, and the password “FullAcccess2022” to give yourself a fully enabled user 🙂

JottaCloud secrets

I dug into the sqlite databases used by the JottaCloud client (and branded ones like Elgiganten) and found something that can be useful for other diggers…

This documentation is for the Windows version of the client. The path to the database files and the path formats within the databases will differ for clients on other OSes.


This method works for finding the location on the Windows version:
Open the client interface, go to settings, and under the “General” tab you will find a button that opens the log file location:

A window with the location ‘C:\Users\{myuser}\AppData\Roaming\Jotta\JottaWorld\log’ will be opened. Go to the parent directory, and there you will find the ‘db’ directory.

Keep this location open and QUIT the Jotta client (from the taskbar or any other effective method).

Copy the ‘db’ folder (or its parent ‘JottaWorld’) to a work or backup location. NEVER change anything without having a backup copy of the ‘db’ folder, or even the whole ‘JottaWorld’ (parent) folder, in case something goes wrong.

Examining the databases

From here, I will examine each of the databases (.db files) and go through what I’ve found out. I will use the sqlite3 client supplied by the microsoft-invented Ubuntu (WSL); the alternative on Windows is to use a native sqlite3 client the same way, or just copy the ‘JottaWorld’ or ‘db’ directory to a computer with Linux (or any other real operating system) installed.

Basic sqlite3 usage

To open the database in sqlite3, simply use the sqlite3 command followed by the database name:

sqlite3 c.db

To show all tables in a database:

.tables
To show the table layout:

.schema {table name}

Select and update statements work basically as in other SQL clients.
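sqlite3 also accepts dot-commands and SQL directly on the command line, which is handy for scripting queries against a copy of the databases. A throwaway example (the table name and columns here are made up for illustration, not from the Jotta databases):

```shell
# Start from a clean demo database:
rm -f /tmp/demo.db
sqlite3 /tmp/demo.db "CREATE TABLE demo (id INTEGER PRIMARY KEY, path string);"
# Dot-commands work non-interactively too:
sqlite3 /tmp/demo.db ".tables"
sqlite3 /tmp/demo.db "INSERT INTO demo (path) VALUES ('/some/file');"
sqlite3 /tmp/demo.db "SELECT path FROM demo;"
```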

c.db (outside the ‘db’ folder)

An empty database with a single table ‘c’, defined as:

CREATE TABLE c (id INTEGER PRIMARY KEY ASC AUTOINCREMENT,type integer, time integer, size integer, attempts integer, checksum string, path string, known );

What it is used for is unknown to me (the table is empty in my db).
This database was last changed almost two years before I stopped the Jotta client.


Contains only one table ‘requests’ defined as

CREATE TABLE requests (id integer primary key autoincrement, callerid integer, localpath, remotepath, created integer, modified integer, revision integer, size integer, checksum varchar(32), queue integer, state integer, attempts integer, flags integer );

What it is used for is unknown to me (the table is empty in my db).
This database was last changed a week before I stopped the Jotta client.


Database for the Jotta Sync folder. This folder is by default synced in full on all computers set up against the same Jotta account. There is no selective sync or OneDrive-like on-demand sync in Jotta; the only option is to completely disable the sync folder on the “Sync” tab in the settings. The sync folder location can be changed there too.





Information about all files


Information about all folders


Files checksummed and queued for transfer


Shared files and folders within the sync folder

The table is defined as:

CREATE TABLE jwt_folders (jwc_id INTEGER PRIMARY KEY ASC AUTOINCREMENT, jwc_stateid, jwc_remotepath, jwc_remotehash, jwc_localpath, jwc_localhash, jwc_basepath, jwc_relativepath, jwc_folderhash , jwc_state, jwc_parent, jwc_newpath);

Folder id, used in the jwc_parent column and in jwc_files


empty on the data I have


Path to the folder at Jotta, starting with ‘/{Jotta user name}/Jotta/Sync/’


md5sum of the folder (?) a folder cannot be hashed


The full local path to the folder


md5sum of the folder (?) a folder cannot be hashed


empty on the data I have


Path relative to the Sync folder location, empty on many of the entries


empty on the data I have


State as cleartext ‘Updated’ if all files are synced


id (jwc_id) of parent folder


empty on the data I have

The table is defined as:

CREATE TABLE jwt_files (jwc_id INTEGER PRIMARY KEY ASC AUTOINCREMENT, jwc_remotepath, jwc_remotesize INTEGER, jwc_remotehash, jwc_localpath, jwc_localsize INTEGER, jwc_localhash, jwc_relativepath, jwc_created INTEGER, jwc_modified INTEGER, jwc_updated INTEGER, jwc_status, jwc_checksum, jwc_state, jwc_uuid, jwc_revision , jwc_folderid, jwc_newpath);

File id


Path to the file at Jotta, starting with ‘/{Jotta user name}/Jotta/Sync/’


File size on the remote end (should match localsize)


md5sum of something at the remote end


The full local path to the file


File size on the local side (should match remotesize)


md5sum of something at the local side


Path relative to the remote location, empty on many of the entries


timestamp of file creation


timestamp of file modification


zero on all my files


empty on the data I have


file md5 checksum


either ‘UpdatedFileState’ or ‘MovingFileState’ (used on renamed files, see ‘jwc_newpath’)


don’t know, ‘{00000000-0000-0000-0000-000000000000}’ on most files


0, 1 or 11 on all my files


id (jwc_id from jwt_folders) of containing folder


New local name of a file renamed because of an upload error

The table is defined as:

CREATE TABLE jwt_queuedfiles (jwc_id INTEGER PRIMARY KEY ASC AUTOINCREMENT, jwc_remotepath, jwc_remotesize INTEGER, jwc_localpath, jwc_localsize INTEGER, jwc_relativepath, jwc_created INTEGER, jwc_modified INTEGER, jwc_status, jwc_checksum, jwc_revision INTEGER, jwc_queueid, jwc_type, jwc_hash , jwc_folderid);

It was empty in my current copy of the database, but it should look more or less like jwt_files (it is probably used only temporarily).

The table is defined as:

CREATE TABLE jwt_shares (jwc_id INTEGER PRIMARY KEY ASC AUTOINCREMENT, jwc_shareid, jwc_localpath, jwc_remotepath, jwc_owner, jwc_members );

Mostly self-explanatory, except for the two fields I’m unable to explain 🙂
jwc_shareid is in the form of the jwc_uuid given above; jwc_owner is probably some secret string about my user (at Jotta) that I’m not supposed to share. It’s a 24-character alphanumeric string.


Contains only one table ‘jobs’ defined as

CREATE TABLE jobs (id integer primary key autoincrement, status integer, uri, name, path, databasepath, files integer, bytes integer );

What it is used for is unknown to me (the table is empty in my db).
This database file was last changed almost a year before I stopped the client.


Backup folders. This is the only table I have made manual changes to (I made the folder names listed in the GUI more obvious for some entries). Never change anything without having a backup, and never change anything while the client is running.



The backup schedule (Schedule tab in settings)


Backup copy of the backup schedule


Files and folders excluded from backup


Internal backup copy of the excludes table


All backup folders set in the client

backup_schedule and backup_schedule_copy
The backup schedule in settings seems to be a very simplified one. Judging by the database layout, it looks like they prepared for different backup time settings every day (I don’t know if that works).
The table is defined as:

CREATE TABLE backup_schedule(id INTEGER PRIMARY KEY, mountpoint INTEGER, start_day TEXT, start_hour INTEGER, start_minute INTEGER, end_day TEXT, end_hour INTEGER, end_minute INTEGER);

All self-explanatory except “mountpoint”, which is set to “-1” when I create a schedule. If the schedule is set to any of the multi-day variants (“weekends”, “weekdays”, “everyday”), there will be multiple entries in the database, one for each day:

sqlite> select * from backup_schedule;

My guess about the ‘mountpoint’ column (set to “-1” by the schedule settings in the client) is that it refers to the ‘mountpoints’ table, so theoretically it should be possible to create separate schedules for each of the mountpoints by entering them directly into the database…
The ‘backup_schedule_copy’ table contains the schedule before making changes through the client.
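To illustrate the one-row-per-day layout, here is a sketch on a throwaway copy of the table. The day-name format and the 22:00–06:00 window are my guesses for illustration, not values observed in the client’s database:

```shell
rm -f /tmp/sched_demo.db
# Same table definition as shown above:
sqlite3 /tmp/sched_demo.db "CREATE TABLE backup_schedule(id INTEGER PRIMARY KEY, mountpoint INTEGER, start_day TEXT, start_hour INTEGER, start_minute INTEGER, end_day TEXT, end_hour INTEGER, end_minute INTEGER);"
# A "weekdays" schedule becomes five rows, one per day:
for day in monday tuesday wednesday thursday friday; do
  sqlite3 /tmp/sched_demo.db "INSERT INTO backup_schedule (mountpoint, start_day, start_hour, start_minute, end_day, end_hour, end_minute) VALUES (-1, '$day', 22, 0, '$day', 6, 0);"
done
sqlite3 /tmp/sched_demo.db "SELECT start_day, start_hour FROM backup_schedule;"
```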

excludes and excludes_copy
All files and folders that are excluded from the backup. This also includes the system and hidden files and folders that are not backed up. From the client settings, it is possible to include hidden files and folders.
The table is defined as:

CREATE TABLE excludes(id INTEGER PRIMARY KEY, mountpoint INTEGER, pattern VARCHAR(1024));

Not much to explain here. ‘mountpoint’ is set to ‘-1’, and I can find no possible use for it to match an entry in the ‘mountpoints’ table. ‘pattern’ allows simple pattern matching (*) against the full local path of a file or folder to exclude from backup.
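A sketch of adding an exclude directly, on a throwaway copy of the table. The pattern value is a made-up example; check the existing rows for the exact path format the client stores, and as always, quit the client and keep a backup before touching the real database:

```shell
rm -f /tmp/excludes_demo.db
# Same table definition as shown above:
sqlite3 /tmp/excludes_demo.db "CREATE TABLE excludes(id INTEGER PRIMARY KEY, mountpoint INTEGER, pattern VARCHAR(1024));"
# mountpoint -1, matching what the client writes:
sqlite3 /tmp/excludes_demo.db "INSERT INTO excludes (mountpoint, pattern) VALUES (-1, 'C:/Users/*/Downloads/*');"
sqlite3 /tmp/excludes_demo.db "SELECT pattern FROM excludes;"
```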

This table contains all the backup folders defined in the client.
The table is defined as:

CREATE TABLE mountpoints(jwc_id INTEGER PRIMARY KEY ASC AUTOINCREMENT,jwc_name,jwc_path,jwc_device,jwc_description,jwc_status,jwc_location,jwc_type,jwc_ip,jwc_suspended );

Name displayed in the client


The path for the folder to backup


Computer name (for the Jotta side ?)


Computer name


Status, can be any of the following:


‘Local’ or ‘Remote’


Zero on all my entries

jwc_ip for local paths, empty for remote


“Suspended” for paused backups, blank otherwise

I find the content of jwc_status to be incorrect more often than correct; while writing this, the client is scanning one of my network drives, but the database says “Uploading”. Many entries are “Up to date” according to the client, but listed as different things in the db.


This sqlite3 database file lacks the .db extension.
It contains a table with queued uploads (scanned files, queued for checksumming):


blob contains array of file information


Another sqlite3 database file without the .db extension.
It contains a table with queued uploads (checksummed files, waiting for an upload slot):


blob contains array of file information


Contains information on all backed up files


Information for all backed up files


Information for all backed up folders



The table is defined as:

CREATE TABLE folders (id integer primary key autoincrement, path text UNIQUE, state integer, parent integer, mountpoint integer, checksum varchar(20));

Full local path to the folder


Contains a value of 1, 2, 5, 6 or 7 in my database; I have no idea what it represents


Id of parent folder (in this table)


mountpoint id in mm.db


md5 checksum on something (a folder cannot be checksummed)

The table is defined as:

CREATE TABLE files (id integer primary key autoincrement, path text UNIQUE, parent integer, size integer, modified integer, created integer, checksum varchar(16), state integer, mountpoint integer);

The full path of the backed up file


the id of the containing folder (in folders table)


file size


timestamp of modification


timestamp of creation


md5 checksum of file


Contains a value of 6 or 7 in my database; I have no idea what it represents


mountpoint id in mm.db

So why all this trouble analyzing the databases?

I wanted an easy way of finding my files by their md5 checksums; that was one of the reasons. Another thing (not solved yet) is that I want to find a way of recreating the share link for a specific file or folder within a public shared folder on my Jotta account (without going through the web interface; I mean, it’s already shared inside an accessible folder).
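The md5 lookup itself is then a one-line query against the ‘files’ table shown above. A sketch on a throwaway database; run the SELECT against a copy of the real backup database instead, and note that the path and checksum values here are made-up examples:

```shell
rm -f /tmp/files_demo.db
# Same 'files' table definition as in the backup database:
sqlite3 /tmp/files_demo.db "CREATE TABLE files (id integer primary key autoincrement, path text UNIQUE, parent integer, size integer, modified integer, created integer, checksum varchar(16), state integer, mountpoint integer);"
sqlite3 /tmp/files_demo.db "INSERT INTO files (path, checksum) VALUES ('C:/data/example.txt', 'd41d8cd98f00b204e9800998ecf8427e');"
# Find a file by its md5 checksum:
sqlite3 /tmp/files_demo.db "SELECT path FROM files WHERE checksum = 'd41d8cd98f00b204e9800998ecf8427e';"
```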

Odd things I noticed are that there are md5 checksums for folders, and three different ones in the sync folder (the jwt_files and jwt_folders tables in dlsq.db), but for the individual files there is only the file’s real md5 checksum.

Anyway… that investigation will continue some other day…

piStorm – Preparing the SD-card for Emu68

This guide is a continuation / restart of piStorm – getting started with Emu68, which was written as a starter’s guide for just getting Emu68 up and running on the piStorm.

Since I wrote that guide, Michal has added instructions similar to the ones I present here to the resources at GitHub.


Emu68 for piStorm Nightly build
Emu68 Docs section at GitHub

Getting the files you need

As described before, you should look for the latest file named something like “”. In the last step, the contents of this file will be copied to the root of the Fat32 partition of the SD-card.

Preparing the SD-card

Emu68 presents partitions with the 0x76 ID as hard drives to the Amiga side through the “brcm-sdhc.device”, so we need to create at least two partitions on the SD-card (which normally comes prepared as a single Fat32 partition).
I do this entirely with the command-line program “diskpart” on my Windows computer.

Find the command prompt, either somewhere in the Windows menu or by using the search function to search for “cmd”. Right-click the icon and select “Run as administrator”. You will get a warning that you are about to do something dangerous; accept that one 🙂

The dangerous part

Insert your SD-card, then run “diskpart” from the command prompt. List all recognized disks with the diskpart command “list disk”. If one disk is obviously your SD-card (in my case “disk 3”), use the “select disk” command to make it the current one. List the partitions on the disk with “list part” to ensure you are working on the right disk.

If you are in any way uncertain, exit “diskpart” with the “exit” command, remove the SD-card, then run “diskpart” again and list the disks. The missing one is your SD-card.

Use the “clean” command to remove all the partitions on the SD-card (seen as the last command in the image above, and as the first in the image below). Create the Fat32 partition; only a few MB are needed, but I usually allocate 200MB for the Emu68 and kickstart files.
As shown below, I then create a 500MB partition, a 2GB partition and one for the rest of the space on the card (in this case 26GB), and with the same command I set the partition ID to 0x76 (which must be specified so the Amiga can find the emulated disks).

Exit “diskpart” with the command “exit”, and then exit the command line shell, also with “exit”.
Commands above (create partitions):

cre part pri size=200
cre part pri size=500 id=76
cre part pri size=2000 id=76
cre part pri id=76

Give the Fat32 partition a drive letter

sel part 1
assign

Format the Fat32 partition
This can be done in many ways, but as we already are inside ‘diskpart’, I present the easiest way first 🙂

format fs=FAT32 label=Emu68 quick

Another way is to accept the format prompt that Windows pops up directly after the “assign” command in ‘diskpart’, or do the same as described below:
Go to “This PC” in Explorer (the file explorer, not the ancient web browser), right-click the small partition on the SD-card and select “Format”.

Copy the files from the latest Emu68 nightly to the root of the SD-card. This can be done using WinRAR ( without extracting the files first: just select the latest nightly archive, right-click and choose “Extract files…”, then type in the drive letter of the 200MB partition:

Copy your choice of Kickstart ROM (usually a kickstart for the A1200) file to the root of the Fat32 partition and update the config.txt accordingly.

Now the SD-card is ready to be booted with the piStorm. Boot from a floppy with the AmigaOS hard drive installation utilities, change the HDToolBox tooltype SCSI_DEVICE_NAME to brcm-sdhc.device, then start HDToolBox and set up the partitions on the disks with IDs other than 0 (zero) (ID 0 represents the whole disk and should not be used within AmigaOS).

This procedure is well described in the current documentation by Michal (and on a lot of other places), so head over there and read his guide.
If you decide to install AmigaOS 3.2, you do not need to use the PFS3aio filesystem. FFS works fine with large disks and partitions in this release.

piStorm – getting started with Emu68

In this guide, which will be my shortest ever, I explain how to get started with the Emu68 barebone JIT emulator for the piStorm.


Emu68 for piStorm Nightly build

The shortest instructions ever 🙂

Download the latest nightly build of the early-alpha Emu68 for piStorm from the resource above. You should be looking for a file named “” or similar (note that the files are sorted in forward alphabetical order, so the latest are a bit down the list).

Extract the files to the root of a fat32-formatted SD-card.

Copy your Amiga kickstart file to the root of that card.

Edit the configuration file (config.txt) and set the kickstart file name (last line in the included config file):

# PiStorm variant - use initramfs to map selected rom
initramfs kick.rom

Insert the SD-card in your pi3a+ mounted to the piStorm. Power on the Amiga and enjoy the extremely short startup time 🙂

The setup is now ready to boot from floppy (although many games do not work yet, at least not when booting from floppy; Workbench floppies do, as does the Install3.2 disk used to install AmigaOS onto a hard drive).

Installing AmigaOS on a hard drive (partition on SD-card)

For hard drive setup, there will be more steps involved, such as partitioning the sd-card into at least one boot partition and one or more Amiga partitions.

piStorm – Preparing the SD-card for Emu68

OpenWrt on Raspberry Pi 4

Installation and configuration notes

Stuff used
Raspberry Pi 4 Compute Module (CM4, 4GB)
Waveshare Dual Gigabit Ethernet Base Board (with case)

OpenWrt Wiki
OpenWrt Firmware for Raspberry Pi 4

(Very thin documentation on the CM4 baseboard used, nothing about the USB3 network port, but some info on the RTC, fan control and display and camera interfaces)

Internet of Things – a techie’s viewpoint
(I used mainly the beginning of chapter 36 for the first good enough solution I found on how to switch the interfaces so that eth0 will be used for WAN and eth1 for LAN)


Get the latest (stable) version of OpenWrt (I use “Factory (EXT4)”), write it to a MicroSD-card the usual way, insert it into the slot on the CM4 board and boot up.

Note: Before booting from the SD-card, you might want to resize the Linux partition and file system on it. Do this on another Linux-based system:
Insert the SD-card into a reader/card slot and check the end of the ‘dmesg’ output to see which device was assigned to the card:

root@DS1517:~# dmesg |tail
[13376.702534] sd 10:0:0:1: [sdr] 61849600 512-byte logical blocks: (31.6 GB/29.4 GiB)
[13376.714483]  sdr: sdr1 sdr2

In this case (on my Synology NAS), the card reader’s slot was assigned ‘sdr’.

Resize the partition with ‘parted’:

root@DS1517:~# parted /dev/sdr
GNU Parted 3.2
Using /dev/sdr
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: TS-RDF8 SD Transcend (scsi)
Disk /dev/sdr: 31.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      4194kB  71.3MB  67.1MB  primary               boot, lba
 2      75.5MB   173MB   104MB  primary

(parted) resizepart 2 -1
(parted) q

Resize the file system with ‘resize2fs /dev/sdr2’

The default is to use eth0 for LAN, which I didn’t like (given the possibility that the other, USB3-based interface might be less stable, and that it uses a kernel driver for a not-quite-matching chip).
To fix this I used the guide mentioned above, the beginning of chapter 36, with some modifications to fit my network.

(section 36.4 in IoT guide)
Later distributions of OpenWrt start up logged in as root on the console, which makes it easier to do the initial adjustments to the network settings. As the guide mentions, if your home network is on the subnet, you can access the shell over SSH (root without password) to do the modifications.
Change the lan section of /etc/config/network to:

config interface 'lan'
    option ifname 'eth0'
    option proto 'dhcp'

Reboot the Pi, and it will get an IP by DHCP (handed out by your old router). Either find that IP in the old router or just run the “ifconfig” command on the console.

Installing the kernel module for the USB3 network port

(section 36.5 in IoT guide)
To get the second network port working, you need to install the correct kernel module for the chipset it is using. In the case of the CM4 base board, the chip is rtl8153. Unfortenately there is no exact match or that chip (yet/ever ?), but rtl8152 will work fine. Use ‘opkg’ to install the module:

opkg update
opkg install usbutils
opkg install kmod-usb-net-rtl8152

For further configuration, I also add a more user-friendly text editor than ‘vi’:

opkg install nano

Verify by ‘ifconfig eth1’ that the second network adapter shows up.

Switching the eth0 / eth1 interfaces to have eth0 for WAN

Now that we have both interfaces visible, we can switch their usage as described in the IoT guide. For my network (LAN side) I use a /16 network mask, so the network on the inside of the CM4 router cannot be on that same IP range.
For the inside, I choose (from the private IP-series), and will give my CM4 router the IP address

Change the old ‘lan’ section to ‘wan’ and add a new “lan” section in /etc/config/network:

config interface 'wan'
	option ifname 'eth0'
	option proto 'dhcp'

config interface 'lan'
    option proto 'static'
    option ifname 'eth1'
    option ipaddr ''
    option netmask ''
    option gateway ''
    option type 'bridge'

(I don’t think the “gateway” option is needed in ‘lan’, but I have to check that)

Configure DHCP on the LAN interface
Add a “dhcp” section for eth1 in /etc/config/dhcp:

config dhcp 'eth1'
    option start '100'
    option leasetime '12h'
    option limit '150'
    option interface 'eth1'

Reboot the CM4 router, connect your uplink cable to eth0 and a computer to eth1. When the CM4 router has started, if everything works well, the computer should get an IP address on the 172.16.3 network (in the range from .100 to .250).

LuCI confusion by manual configuration

Access the web interface (on the LAN address configured earlier) and set a password for it.
When you first access the web interface of your manually configured CM4 router, LuCI will ask to update the configuration to the new format (using ‘br-lan’ instead of “option type ‘bridge’”), with the ‘br-lan’ device replacing the manually entered ‘ifname’ in the lan section. Allow these changes, and the GUI is ready for use.


The first step is to go to System/Software in the menu and click the “Update lists” button to refresh/create the list of available packages for OpenWrt. Then use the many OpenWrt guides online for additional configuration ideas.

If you have your CM4 router behind another router on the local network during setup, change the firewall setting for WAN to allow inbound access (unless you’re happy with accessing it from a computer on that router’s LAN interface).
You find that setting under “Network/Firewall”:

After this change you can access the web interface and SSH over the WAN-side IP. Do not forget to change it back if this router is put on a public network!
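For reference, the GUI change corresponds roughly to flipping the wan zone’s input policy in /etc/config/firewall. This is a sketch of only the relevant lines; a real zone section contains more options (forward, output, masquerading, the covered networks):

```
config zone
    option name 'wan'
    option input 'ACCEPT'
```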

That’s it for the basics of getting started with OpenWrt on a Pi 4 with dual ethernet interfaces (either with the CM4 baseboard used here or a separate USB3 dongle). I have probably missed some of my steps, as this guide was written some time after I completed the setup.


Add and configure (accept default settings) the package named ‘luci-app-statistics’ to get graphs for CPU usage and network traffic.
Add the module ‘collectd-mod-thermal’ to get graph for CPU temperature.

Ivacy VPN settings

Get the file from here:
or (any of the non-Mac and non-Windows files) here:

Follow the guide

Ivacy-VPN related content in /etc/config/openvpn (as created by LuCI)
For easier configuration, skip the steps in the guide that explain how to configure the VPN connection using LuCI; just add the connection, hit “Save & Apply” on the basic settings page, then edit the /etc/config/openvpn file directly:

config openvpn 'Ivacy'
        option dev 'tun'
        option nobind '1'
        option comp_lzo 'yes'
        option verb '1'
        option persist_tun '1'
        option client '1'
        option auth_user_pass '/etc/openvpn/userpass.txt'
        option resolv_retry 'infinite'
        option auth 'SHA1'
        option cipher 'AES-256-CBC'
        option mute_replay_warnings '1'
        option tls_client '1'
        option ca '/etc/openvpn/ca.crt'
        option tls_auth '/etc/openvpn/tls-auth.key'
        option auth_nocache '1'
        option remote_cert_tls 'server'
        option key_direction '1'
        option proto 'udp'
        option port '53'
        list remote ''
        option enabled '1'
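The file referenced by ‘auth_user_pass’ is a plain two-line text file: username on the first line, password on the second. A sketch with placeholder values (the real file belongs in /etc/openvpn/userpass.txt and should be readable by root only):

```shell
# Restrict permissions on anything created below:
umask 077
# Username on line 1, password on line 2 (placeholders, not real credentials):
printf '%s\n%s\n' 'your-ivacy-username' 'your-ivacy-password' > /tmp/userpass_demo.txt
cat /tmp/userpass_demo.txt
```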

Server list

Xpenology – Synology DSM on non-Synology hardware

This bunch of resources needs to be reorganized some day… I just made this post to close off a rotting web browser window…


Specific hardware



Synology DSM archive


Synology DSM 7 and broken FTP support in curl

I recently updated my DS1517 to DSM 7 and noticed that FTP support has been left out of the curl/libcurl they included. This is how I compiled the latest version of curl, including support for all the omitted protocols. It still needs more fixing, since I was not able to compile it with SSL support (so no https, which is included in curl in DSM 7).

My guide is for the Synology DS1517 (ARM). You have to download the correct files for your NAS and set the correct options (paths and names) for the compile tools if you have another model.

The problem

For some unknown reason, Synology decided to drop support for all protocols except http and https in the curl binary included with DSM 7:

root@DS1517:~# curl --version
curl 7.75.0 (arm-unknown-linux-gnueabi) libcurl/7.75.0 OpenSSL/1.1.1k zlib/1.2.11 c-ares/1.14.0 nghttp2/1.41.0
Release-Date: 2021-02-03
Protocols: http https
Features: alt-svc AsynchDNS Debug HTTP2 HTTPS-proxy IPv6 Largefile libz NTLM NTLM_WB SSL TrackMemory UnixSockets

The outcome of following this guide:

curl.ftp --version
curl 7.79.1 (arm-unknown-linux-gnueabihf) libcurl/7.79.1
Release-Date: 2021-09-22
Protocols: dict file ftp gopher http imap mqtt pop3 rtsp smtp telnet tftp
Features: alt-svc AsynchDNS IPv6 Largefile UnixSockets

As seen above, I was not able to enable SSL in my compiled version, so this will not replace the curl included in DSM 7, but it can be installed in /bin under another name, as it has libcurl statically linked into the binary.
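To compare the two binaries quickly, the protocol check above can be scripted. A minimal sketch, assuming POSIX sh; the helper name and the paths in the loop are my own, not part of DSM:

```shell
# Helper (my own, not part of DSM): return 0 if the given curl binary
# lists "ftp" on its "Protocols:" line.
curl_has_ftp() {
    "$1" --version 2>/dev/null | grep -iq '^Protocols:.*ftp'
}

# Compare the stock binary with the recompiled one side by side.
for bin in /usr/bin/curl /bin/curl.ftp; do
    if curl_has_ftp "$bin"; then
        echo "$bin: FTP supported"
    else
        echo "$bin: no FTP (or binary missing)"
    fi
done
```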

What you need to compile for the Synology

The first thing you need is a Linux installation to use as a development system, containing the Synology toolkit for cross-compiling.
A fairly standard installation will do; at least mine did (though it also includes PHP, MySQL, Apache and other useful stuff). This is preferably done in a virtual machine, but you can of course use a physical computer.

You also need the Synology DSM toolchain for the CPU in the NAS you want to compile for. I found the links in the Synology Developer Guide (beta).
There is also supposed to be an online version of the guide, but at least for me, none of the links within it worked.

Get the toolchain
To find out which toolchain you need, run the command ‘uname -a’:

root@DS1517:~# uname -a
Linux DS1517 3.10.108 #41890 SMP Thu Jul 15 03:42:22 CST 2021 armv7l GNU/Linux synology_alpine_ds1517

As seen above, the DS1517 reports “synology_alpine_ds1517”, so you should look for the “alpine” versions of the downloads for this NAS.
Get the correct toolchain for your NAS from the Synology toolkit downloads. For the DS1517, I downloaded the file “alpine-gcc472_glibc215_alpine-GPL.txz”.
Download and unpack it on the development system:

wget ""
tar xJf alpine-gcc472_glibc215_alpine-GPL.txz -C /usr/local/

The above will download and unpack the toolchain to the /usr/local/arm-linux-gnueabihf folder. This contains Linux executables for the GNU compilers (gcc, g++ etc).
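A quick sanity check after unpacking can save confusion later. The helper below is my own addition (not from the guide); the prefix matches the alpine toolchain above, so adjust it for other models:

```shell
# My own check, not from the guide: confirm the cross-gcc landed under
# the expected prefix and is executable.
check_toolchain() {
    dir=$1
    triplet=$2
    if [ -x "$dir/bin/$triplet-gcc" ]; then
        echo "ok: $triplet-gcc found under $dir"
    else
        echo "missing: $dir/bin/$triplet-gcc" >&2
        return 1
    fi
}

check_toolchain /usr/local/arm-linux-gnueabihf arm-linux-gnueabihf
```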

arm-linux-gnueabihf-gcc: No such file or directory
Now, whenever you try to execute any of the commands extracted to the bin directory, you will probably get a “No such file or directory” error, even though the path and filename are correct and the file is executable.
If you examine the executable files using the ‘file’ command you will discover that these are 32-bit executables:

root@ubu-01:~# file /usr/local/arm-linux-gnueabihf/bin/arm-linux-gnueabihf-gcc-4.7.2
/usr/local/arm-linux-gnueabihf/bin/arm-linux-gnueabihf-gcc-4.7.2: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/, for GNU/Linux 2.6.15, stripped

I found the solution to the problem here:
arm-linux-gnueabihf-gcc: No such file or directory
In short:

dpkg --add-architecture i386
apt-get update
apt-get install git build-essential fakeroot
apt-get install gcc-multilib
apt-get install zlib1g:i386
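The root cause is that the toolchain binaries are 32-bit x86 executables running on a (most likely) 64-bit host, so the dynamic loader they ask for does not exist until the i386 packages are installed. A small diagnostic, assuming nothing beyond `file` and `grep` (the helper name is mine):

```shell
# My own diagnostic: does `file` describe a binary as a 32-bit x86 ELF,
# i.e. does it need the i386 multiarch packages installed above?
is_32bit_x86() {
    # reads a `file -b` description on stdin
    grep -q 'ELF 32-bit.*80386'
}

file -b /usr/local/arm-linux-gnueabihf/bin/arm-linux-gnueabihf-gcc-4.7.2 2>/dev/null \
    | is_32bit_x86 && echo "32-bit x86 binary: i386 multiarch support required"
```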

Now that we have the cross-compiling toolkit working, let’s continue with curl.

Cross-compile curl for Synology NAS

The current version at the time I wrote this guide was 7.79.1, so download the source and uncompress it:

tar xfz curl-7.79.1.tar.gz
cd curl-7.79.1

Set some variables and GCC options

export TC="arm-linux-gnueabihf"
export PATH=$PATH:/usr/local/${TC}/bin
export CPPFLAGS="-I/usr/local/${TC}/${TC}/include"
export AR=${TC}-ar
export AS=${TC}-as
export LD=${TC}-ld
export RANLIB=${TC}-ranlib
export CC=${TC}-gcc
export NM=${TC}-nm
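Before configuring, it is worth confirming that the cross compiler actually resolves via the extended PATH; otherwise configure can silently fall back to the host gcc and you end up with an x86 binary. This check is my own addition:

```shell
# My own sanity check: fail loudly if the cross compiler is not on PATH.
check_cc() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "found: $1"
    else
        echo "error: $1 not on PATH" >&2
        return 1
    fi
}

check_cc "${CC:-arm-linux-gnueabihf-gcc}"
```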

Build and install into installdir

./configure --disable-shared --enable-static --without-ssl --host=${TC} --prefix=/usr/local/${TC}/${TC}
make install

The above will build a statically linked curl binary for the Synology and put it in the ‘bin’ folder under the path specified with --prefix.
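Before copying anything over, you can check that the result really is a statically linked ARM binary. The helper below is my own sketch; the exact `file` output wording may vary slightly between versions, and ${TC} comes from the exports above:

```shell
# My own helper: pattern-match a `file -b` description for a statically
# linked ARM executable.
is_static_arm_desc() {
    case $1 in
        *ARM*statically*) return 0 ;;
        *) return 1 ;;
    esac
}

desc=$(file -b "/usr/local/${TC}/${TC}/bin/curl" 2>/dev/null)
is_static_arm_desc "$desc" \
    && echo "static ARM binary: ready to copy to the NAS" \
    || echo "not a static ARM binary: check the build"
```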

The final step is to copy the ‘curl’ binary over to the Synology (not to /bin yet) and test it; use “--version” to check that the binary supports FTP and the other protocols omitted by Synology:

./curl --version
curl 7.79.1 (arm-unknown-linux-gnueabihf) libcurl/7.79.1
Release-Date: 2021-09-22
Protocols: dict file ftp gopher http imap mqtt pop3 rtsp smtp telnet tftp
Features: alt-svc AsynchDNS IPv6 Largefile UnixSockets

If everything seems ok, copy the file to /bin and give it another name:

cp -p curl /bin/curl.ftp

If it complains about different versions of curl and libcurl, you failed somewhere when trying to link libcurl statically.


Apollo Accelerators – Vampire

Apollo Core (68080)
Apollo Forum
Apollo Accelerators
Apollo Accelerators Wiki: Latest core (500, 600, 1200) | Installing Kickstarts
Vampire 500 V2: Part 1 | Part 2 (Epsilon’s Amiga Blog)
Checkmate A1500 Plus with Vampire 500V2 (Epsilon’s Amiga Blog)

The Complete Amiga 500 Vampire V500 V2+ Installation Guide (Amitopia)

My Vampire Card has arrived! (Lyonsden Blog)
Installing the Vampire V500 V2+ in my Amiga 500 (Lyonsden Blog)

AmiKit XE for Vampire V2 (AmiKit XE changelog) (Vampire PCB maker)
GOLD 3 Alpha
Quartus Prime (for flashing Vampire using USB Blaster)


Amiga Vampire CoffinOs – Quick setup and fun (Cotter’s Stuff)

Apollo Vampire – Emulation or Amiga AAA Salvation (Stephen Jones)

Episode 79 68080 Vampire install Amiga 2000 (Chris Edwards)

Amiga 500 Plus & Vampire 500 V2 + Follow Up (Dave’s Game Room)

8/16/2020 Demo of new Apollo OS with Manuel Jesus of Apollo Team, Tiny Bobble & EPIC Unboxing (Amiga Bill)