JottaCloud secrets

I dug into the SQLite databases used by the JottaCloud client (and branded versions of it, like Elgiganten) and found some things that may be useful to other diggers…

This documentation is for the Windows version of the client. The path to the database files, and the path formats within the databases, will differ for the clients for other OSes.

Preparing

This method works for finding the location in the Windows version:
Open the client interface, go to Settings, and under the “General” tab you will find a button that opens the log file location:

A window showing the location ‘C:\Users\{myuser}\AppData\Roaming\Jotta\JottaWorld\log’ will open. Go to the parent directory, and there you will find the ‘db’ directory.

Keep this location open and QUIT the Jotta client (from the taskbar or any other effective method).

Copy the ‘db’ folder (or its parent ‘JottaWorld’) to a working or backup location. NEVER change anything without having a backup copy of the ‘db’ folder, or even of the whole ‘JottaWorld’ parent folder, in case something goes wrong.
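
For example, from a Windows command prompt (the destination path here is just an example; adjust to taste):

robocopy "%APPDATA%\Jotta\JottaWorld" "D:\JottaBackup\JottaWorld" /E

The /E option makes robocopy copy all subdirectories, including empty ones.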

Examining the databases

From here on, I will examine each of the databases (.db files) and go through what I have found out. I will use the sqlite3 client supplied by the Microsoft-invented Ubuntu (WSL); the alternative on Windows is to use a native sqlite3 client the same way, or to just copy the ‘JottaWorld’ or ‘db’ directory to a computer with Linux (or any other real operating system) installed.

Basic sqlite3 usage

To open the database in sqlite3, simply use the sqlite3 command followed by the database name:

sqlite3 c.db

To show all tables in a database:

.tables

To show the table layout:

.schema {table name}

Select and update statements work basically as in other SQL clients.
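
For example, queries can also be passed directly on the command line (table and column names here are from the sm.db database described further down):

sqlite3 sm.db "select count(*) from files;"
sqlite3 sm.db "select path, size from files limit 5;"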

c.db (outside the ‘db’ folder)

An empty database with a single table ‘c’, defined as:

CREATE TABLE c (id INTEGER PRIMARY KEY ASC AUTOINCREMENT,type integer, time integer, size integer, attempts integer, checksum string, path string, known );

Its use is unknown to me (the table is empty in my db).
This database was last changed almost two years before I stopped the Jotta client.

dl.db

Contains only one table, ‘requests’, defined as:

CREATE TABLE requests (id integer primary key autoincrement, callerid integer, localpath, remotepath, created integer, modified integer, revision integer, size integer, checksum varchar(32), queue integer, state integer, attempts integer, flags integer );

Its use is unknown to me (the table is empty in my db).
This database was last changed a week before I stopped the Jotta client.

dlsq.db

Database for the Jotta Sync folder. This folder is by default synced in full on all computers set up against the same Jotta account. There is no selective sync or OneDrive-like on-demand sync in Jotta; the only option is to completely disable the sync folder on the “Sync” tab in the settings. The sync folder location can be changed there too.

Tables:

jwt_blockingevents

(empty)

jwt_files

Information about all files

jwt_folders

Information about all folders

jwt_queuedfiles

Files checksummed and queued for transfer

jwt_shares

Shared files and folders within the sync folder

jwt_folders
The table is defined as:

CREATE TABLE jwt_folders (jwc_id INTEGER PRIMARY KEY ASC AUTOINCREMENT, jwc_stateid, jwc_remotepath, jwc_remotehash, jwc_localpath, jwc_localhash, jwc_basepath, jwc_relativepath, jwc_folderhash , jwc_state, jwc_parent, jwc_newpath);
jwc_id

Folder id, used in the jwc_parent column and in jwt_files

jwc_stateid

empty on the data I have

jwc_remotepath

Path to the folder at Jotta, starting with ‘/{Jotta user name}/Jotta/Sync/’

jwc_remotehash

md5 hash of the folder (?); a folder cannot be hashed directly, so it is unclear what this is computed over

jwc_localpath

The full local path to the folder

jwc_localhash

md5 hash of the folder (?); a folder cannot be hashed directly, so it is unclear what this is computed over

jwc_basepath

empty on the data I have

jwc_relativepath

Path relative to the Sync folder location, empty on many of the entries

jwc_folderhash

empty on the data I have

jwc_state

State in cleartext: ‘Updated’ if all files are synced

jwc_parent

id (jwc_id) of parent folder

jwc_newpath

empty on the data I have

jwt_files
The table is defined as:

CREATE TABLE jwt_files (jwc_id INTEGER PRIMARY KEY ASC AUTOINCREMENT, jwc_remotepath, jwc_remotesize INTEGER, jwc_remotehash, jwc_localpath, jwc_localsize INTEGER, jwc_localhash, jwc_relativepath, jwc_created INTEGER, jwc_modified INTEGER, jwc_updated INTEGER, jwc_status, jwc_checksum, jwc_state, jwc_uuid, jwc_revision , jwc_folderid, jwc_newpath);
jwc_id

File id

jwc_remotepath

Path to the file at Jotta, starting with ‘/{Jotta user name}/Jotta/Sync/’

jwc_remotesize

File size on the remote end (should match localsize)

jwc_remotehash

md5sum of something at the remote end

jwc_localpath

The full local path to the file

jwc_localsize

File size on the local side (should match remotesize)

jwc_localhash

md5sum of something at the local side

jwc_relativepath

Path relative to the remote location, empty on many of the entries

jwc_created

timestamp of file creation

jwc_modified

timestamp of file modification

jwc_updated

zero on all my files

jwc_status

empty on the data I have

jwc_checksum

file md5 checksum

jwc_state

either ‘UpdatedFileState’ or ‘MovingFileState’ (used on renamed files, see ‘jwc_newpath’)

jwc_uuid

don’t know, ‘{00000000-0000-0000-0000-000000000000}’ on most files

jwc_revision

0, 1 or 11 on all my files

jwc_folderid

id (jwc_id from jwt_folders) of containing folder

jwc_newpath

New local name of a file renamed because of an upload error

jwt_queuedfiles
The table is defined as:

CREATE TABLE jwt_queuedfiles (jwc_id INTEGER PRIMARY KEY ASC AUTOINCREMENT, jwc_remotepath, jwc_remotesize INTEGER, jwc_localpath, jwc_localsize INTEGER, jwc_relativepath, jwc_created INTEGER, jwc_modified INTEGER, jwc_status, jwc_checksum, jwc_revision INTEGER, jwc_queueid, jwc_type, jwc_hash , jwc_folderid);

It was empty in my current copy of the database, but it should look more or less like jwt_files (it is only used temporarily, while transfers are queued).

jwt_shares
The table is defined as:

CREATE TABLE jwt_shares (jwc_id INTEGER PRIMARY KEY ASC AUTOINCREMENT, jwc_shareid, jwc_localpath, jwc_remotepath, jwc_owner, jwc_members );

Mostly self-explanatory, except for the two fields I’m unable to explain 🙂
jwc_shareid is in the same form as jwc_uuid given above; jwc_owner is probably some secret string identifying my user (at Jotta) that I’m not supposed to share. It is a 24-character alphanumeric string.
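
To list what is shared, something like this should do (a sketch; the columns are as defined above):

sqlite3 dlsq.db "select jwc_localpath, jwc_remotepath, jwc_members from jwt_shares;"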

jobs.db

Contains only one table, ‘jobs’, defined as:

CREATE TABLE jobs (id integer primary key autoincrement, status integer, uri, name, path, databasepath, files integer, bytes integer );

Its use is unknown to me (the table is empty in my db).
This database file was last changed almost a year before I stopped the client.

mm.db

Backup folders. This is the only database I have made manual changes to (I made the folder names listed in the GUI more obvious on some entries). Never change anything without having a backup, and never change anything while the client is running.

Tables:

backup_schedule

The backup schedule (Schedule tab in settings)

backup_schedule_copy

Backup copy of the backup schedule

excludes

Files and folders excluded from backup

excludes_copy

Internal backup copy of the excludes table

mountpoints

All backup folders set in the client

backup_schedule and backup_schedule_copy
The backup schedule in the settings seems very simplified. Judging by the database layout, it looks like they prepared to allow different backup time settings for every day (I don’t know if that works).
The table is defined as:

CREATE TABLE backup_schedule(id INTEGER PRIMARY KEY, mountpoint INTEGER, start_day TEXT, start_hour INTEGER, start_minute INTEGER, end_day TEXT, end_hour INTEGER, end_minute INTEGER);

All self-explanatory except “mountpoint”, which is set to “-1” when I create a schedule. If the schedule is set to any of the multi-day variants (“weekends”, “weekdays”, “everyday”), there will be multiple entries in the database, one for each day:

With “everyday”:

sqlite> select * from backup_schedule;
1|-1|Monday|2|0|Monday|7|0
2|-1|Sunday|2|0|Sunday|7|0
3|-1|Saturday|2|0|Saturday|7|0
4|-1|Wednesday|2|0|Wednesday|7|0
5|-1|Tuesday|2|0|Tuesday|7|0
6|-1|Friday|2|0|Friday|7|0
7|-1|Thursday|2|0|Thursday|7|0

With “weekends”:

sqlite> select * from backup_schedule;
1|-1|Sunday|2|0|Sunday|7|0
2|-1|Saturday|2|0|Saturday|7|0
sqlite>

My guess about the ‘mountpoint’ column (which is set to “-1” by the schedule settings in the client) is that it refers to the ‘mountpoints’ table, so it should theoretically be possible to create separate schedules for each of the mountpoints by entering them directly into the database…
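
If that guess is right, something like the following (completely untested, strictly at your own risk, with the client stopped and a backup taken) should create a Monday-only schedule for the mountpoint with id 3:

sqlite3 mm.db "insert into backup_schedule (mountpoint, start_day, start_hour, start_minute, end_day, end_hour, end_minute) values (3, 'Monday', 2, 0, 'Monday', 7, 0);"
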
The ‘backup_schedule_copy’ table contains the schedule before making changes through the client.

excludes and excludes_copy
All files and folders that are excluded from the backup. This also includes the system and hidden files and folders that are not backed up by default. In the client settings, it is possible to include hidden files and folders.
The table is defined as:

CREATE TABLE excludes(id INTEGER PRIMARY KEY, mountpoint INTEGER, pattern VARCHAR(1024));

Not much to explain here. ‘mountpoint’ is set to ‘-1’, and I see no use for it matching an entry in the ‘mountpoints’ table. ‘pattern’ allows simple pattern matching (*) against the full local path of a file or folder to exclude from backup.
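
For example, to look at the existing patterns, and (again untested, client stopped, backup taken; the path is just an illustration) to add an exclude of your own:

sqlite3 mm.db "select pattern from excludes limit 10;"
sqlite3 mm.db "insert into excludes (mountpoint, pattern) values (-1, 'C:\\Users\\*\\Downloads\\*');"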

mountpoints
This table contains all the backup folders defined in the client.
The table is defined as:

CREATE TABLE mountpoints(jwc_id INTEGER PRIMARY KEY ASC AUTOINCREMENT,jwc_name,jwc_path,jwc_device,jwc_description,jwc_status,jwc_location,jwc_type,jwc_ip,jwc_suspended );
jwc_name

Name displayed in the client

jwc_path

The path for the folder to backup

jwc_device

Computer name (for the Jotta side?)

jwc_description

Computer name

jwc_status

Status, can be any of the following:
Scanning
ScheduleWaiting
AllGood
Uploading
QueuedForScan

jwc_location

‘Local’ or ‘Remote’

jwc_type

Zero on all my entries

jwc_ip

127.0.0.1 for local paths, empty for remote

jwc_suspended

“Suspended” for paused backups, blank otherwise

I find the content of jwc_status to be incorrect more often than correct: while writing this, the client is scanning one of my network drives, but the database says “Uploading”. Many entries are “Up to date” according to the client, but are listed as something else in the db.
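
This is easy to check against your own copy of the database:

sqlite3 mm.db "select jwc_name, jwc_status, jwc_suspended from mountpoints;"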

reque_c

This SQLite database file comes without the .db extension.
It contains a table with queued uploads (scanned files, queued for checksumming):

CREATE TABLE uploads (id INTEGER PRIMARY KEY, tag INTEGER, blob BLOB );

blob contains an array of file information
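
To peek at a queued entry (the blob is dumped as hex; I have not decoded the serialization format):

sqlite3 reque_c "select id, tag, hex(blob) from uploads limit 1;"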

reque_u

Another SQLite database file without the .db extension.
It contains a table with queued uploads (checksummed, waiting for an upload slot):

CREATE TABLE uploads (id INTEGER PRIMARY KEY, tag INTEGER, blob BLOB );

blob contains an array of file information

sm.db

Contains information on all backed-up files and folders.
Tables:

files

Information for all backed up files

folders

Information for all backed up folders

mountpoint_status

(empty)

folders
The table is defined as:

CREATE TABLE folders (id integer primary key autoincrement, path text UNIQUE, state integer, parent integer, mountpoint integer, checksum varchar(20));
path

Full local path to the folder

state

Contains a value of 1, 2, 5, 6 or 7 in my database; I have no idea what it represents

parent

Id of parent folder (in this table)

mountpoint

mountpoint id in mm.db

checksum

md5 checksum of something (a folder cannot be checksummed directly)

files
The table is defined as:

CREATE TABLE files (id integer primary key autoincrement, path text UNIQUE, parent integer, size integer, modified integer, created integer, checksum varchar(16), state integer, mountpoint integer);
path

The full path of the backed up file

parent

the id of the containing folder (in folders table)

size

file size

modified

timestamp of modification

created

timestamp of creation

checksum

md5 checksum of file

state

Contains a value of 6 or 7 in my database; I have no idea what it represents

mountpoint

mountpoint id in mm.db

So why all this trouble analyzing the databases?

One reason was that I wanted an easy way of finding my files by their md5 checksums. Another (not solved yet) is that I want to find a way of recreating the share link for a specific file or folder within a publicly shared folder on my Jotta account, without going through the web interface (it is, after all, already shared inside an accessible folder).
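
For the checksum part, the ‘files’ table in sm.db is enough. A minimal sketch, assuming the checksum column holds the plain md5 hex digest and ‘somefile.bin’ is the file to look up:

sqlite3 sm.db "select path from files where lower(checksum) = '$(md5sum somefile.bin | cut -d' ' -f1)';"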

One odd thing I noticed is that there are md5 checksums for folders, even three different ones in the sync folder (the jwt_files and jwt_folders tables in dlsq.db), while individual files only carry their real md5 checksum.

Anyway… that investigation will continue some other day…

Xpenology – Synology DSM on non-Synology hardware

This bunch of resources needs to be reorganized some day. I only wrote it down to be able to close a rotting web browser window.

General

https://xpenology.org/
https://xpenology.org/installation/
https://xpenology.club/category/tutorials/
https://xpenology.com/forum/topic/9394-installation-faq/?tab=comments#comment-81101
https://xpenology.com/forum/topic/9392-general-faq/?tab=comments#comment-82390

Specific hardware

https://xpenology.com/forum/topic/20314-buffalo-terastation-ts5800d/
https://en.wikipedia.org/wiki/Haswell_(microarchitecture)

Misc

https://xpenology.com/forum/topic/24864-transcoding-without-a-valid-serial-number/
https://xpenology.com/forum/topic/38939-serial-number-for-ds918/
https://xpenogen.github.io/serial_generator/index.html

https://xpenology.com/forum/topic/29872-tutorial-mount-boot-stick-partitions-in-windows-edit-grubcfg-add-extralzma/
https://xpenology.com/forum/topic/12422-xpenology-tool-for-windows-x64/page/5/

Unsorted

https://xpenology.com/forum/topic/12952-dsm-62-loader/page/75/
https://xpenology.com/forum/topic/28183-running-623-on-esxi-synoboot-is-broken-fix-available/
https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/
https://xpenology.com/forum/topic/7973-tutorial-installmigrate-dsm-52-to-61x-juns-loader/
https://xpenology.com/forum/topic/7294-links-to-dsm-and-critical-updates/

Synology DSM archive

https://archive.synology.com/download/Os/DSM/6.2.3-25426-3

Errors

https://xpenology.com/forum/topic/14114-usb-stick-no-vidpid/
https://xpenology.com/forum/topic/9853-dsm_ds3617xs-installation-error-the-file-is-probably-corrupt-13/
https://xpenology.com/forum/topic/13253-error-21-problem/

Synology DSM 7 and broken FTP support in curl

I recently updated my DS1517 to DSM 7 and noticed that FTP support has been left out of the curl/libcurl they include. This is how I compiled the latest version of curl, with support for all the omitted protocols. It still needs more fixing, since I was not able to compile it with SSL support (so no https, which the curl in DSM 7 does include).

My guide is for the Synology DS1517 (ARM). If you have another model, you have to download the correct files for your NAS and set the correct options (paths and names) for the compile tools.

The problem

For some unknown reason, Synology decided to drop support for all protocols except http and https in the curl binary included with DSM 7:

root@DS1517:~# curl --version
curl 7.75.0 (arm-unknown-linux-gnueabi) libcurl/7.75.0 OpenSSL/1.1.1k zlib/1.2.11 c-ares/1.14.0 nghttp2/1.41.0
Release-Date: 2021-02-03
Protocols: http https
Features: alt-svc AsynchDNS Debug HTTP2 HTTPS-proxy IPv6 Largefile libz NTLM NTLM_WB SSL TrackMemory UnixSockets
root@DS1517:~#

The outcome of following this guide:

curl.ftp --version
curl 7.79.1 (arm-unknown-linux-gnueabihf) libcurl/7.79.1
Release-Date: 2021-09-22
Protocols: dict file ftp gopher http imap mqtt pop3 rtsp smtp telnet tftp
Features: alt-svc AsynchDNS IPv6 Largefile UnixSockets

As seen and mentioned above, I was not able to enable SSL in my compiled version, so this will not replace the curl included in DSM 7, but it can be installed in /bin under another name, as libcurl is statically linked into the binary.

What you need to compile for the Synology

The first thing you need is a Linux installation to use as a development system, with the Synology toolkit for cross-compiling on it.
A fairly standard installation will do; at least mine did (but it also includes PHP, MySQL, Apache and other useful stuff). This is preferably done in a virtual machine, but you can of course use a physical computer for it.

You also need the Synology DSM toolchain for the CPU in the NAS you want to compile for. I found the links in the Synology Developer Guide (beta).
There is also supposed to be an online version of the guide, but at least for me, none of the links within it were working.

Get the toolchain
To find out which toolchain you need, run the command ‘uname -a’:

root@DS1517:~# uname -a
Linux DS1517 3.10.108 #41890 SMP Thu Jul 15 03:42:22 CST 2021 armv7l GNU/Linux synology_alpine_ds1517

As seen above, the DS1517 reports “synology_alpine_ds1517”, so you should look for the “alpine” versions of the downloads for this NAS.
Get the correct toolchain for your NAS from the Synology toolkit downloads. For the DS1517, I downloaded the file “alpine-gcc472_glibc215_alpine-GPL.txz”.
Download and unpack it on the development system:

wget "https://global.download.synology.com/download/ToolChain/toolchain/7.0-41890/Annapurna%20Alpine%20Linux%203.10.108/alpine-gcc472_glibc215_alpine-GPL.txz"
tar xJf alpine-gcc472_glibc215_alpine-GPL.txz -C /usr/local/

The above will download and unpack the toolchain into the /usr/local/arm-linux-gnueabihf folder, which contains Linux executables for the GNU compilers (gcc, g++ etc).

arm-linux-gnueabihf-gcc: No such file or directory
Now, whenever you try to execute any of the commands extracted to the bin directory, you will probably get the “No such file or directory” error (even though the path and filename are correct and the file is executable).
If you examine the executable files using the ‘file’ command, you will discover that they are 32-bit executables:

root@ubu-01:~# file /usr/local/arm-linux-gnueabihf/bin/arm-linux-gnueabihf-gcc-4.7.2
/usr/local/arm-linux-gnueabihf/bin/arm-linux-gnueabihf-gcc-4.7.2: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.15, stripped

I found the solution to the problem here:
arm-linux-gnueabihf-gcc: No such file or directory
In short:

dpkg --add-architecture i386
apt-get update
apt-get install git build-essential fakeroot
apt-get install gcc-multilib
apt-get install zlib1g:i386

Now that we have the cross-compiling toolkit working, let’s continue with curl.

Cross-compile curl for Synology NAS

The current version at the time I wrote this guide was 7.79.1, so I downloaded the source and uncompressed it:

wget https://curl.se/download/curl-7.79.1.tar.gz
tar xfz curl-7.79.1.tar.gz
cd curl-7.79.1

Set some variables and GCC options

export TC="arm-linux-gnueabihf"
export PATH=$PATH:/usr/local/${TC}/bin
export CPPFLAGS="-I/usr/local/${TC}/${TC}/include"
export AR=${TC}-ar
export AS=${TC}-as
export LD=${TC}-ld
export RANLIB=${TC}-ranlib
export CC=${TC}-gcc
export NM=${TC}-nm

Build and install into installdir

./configure --disable-shared --enable-static --without-ssl --host=${TC} --prefix=/usr/local/${TC}/${TC}
make
make install

The above will build a statically linked curl binary for the Synology, and put the binary in the ‘bin’ folder under the path specified with --prefix.
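
Before copying it over, it is worth verifying with the ‘file’ command (with TC still set as above) that you actually built an ARM binary and not one for the development machine:

file /usr/local/${TC}/${TC}/bin/curl

It should report an ARM executable, not Intel 80386 or x86-64.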

The final step is to copy the ‘curl’ binary over to the Synology (not to /bin yet) and test it. Use “--version” to check that the binary supports FTP and the other protocols omitted by Synology:

./curl --version
curl 7.79.1 (arm-unknown-linux-gnueabihf) libcurl/7.79.1
Release-Date: 2021-09-22
Protocols: dict file ftp gopher http imap mqtt pop3 rtsp smtp telnet tftp
Features: alt-svc AsynchDNS IPv6 Largefile UnixSockets

If everything seems ok, copy the file to /bin and give it another name:

cp -p curl /bin/curl.ftp

If it complains about different versions of curl and libcurl, you failed somewhere in linking the correct libcurl statically.

Most useful sources for this article:

https://global.download.synology.com/download/Document/Software/DeveloperGuide/Firmware/DSM/7.0/enu/DSM_Developer_Guide_7_0_Beta.pdf
https://thalib.github.io/2017/02/17/32bit-no-such-a-file-or-directory/
https://curl.se/docs/install.html

Buffalo LS-QVL root access

https://forums.buffalotech.com/index.php?topic=37677.0

Get the updated acp_commander.jar from Github
https://github.com/1000001101000/acp-commander

All the needed files are in place; it just needs some tweaking. As with everything Buffalo, I don’t know if it survives a reboot.

java -jar acp_commander.jar -t 192.168.0.10 -pw AdminPassword -c "(echo newrootpass;echo newrootpass)|passwd"
java -jar acp_commander.jar -t 192.168.0.10 -pw AdminPassword -c "sed -i 's/PermitRootLogin/#PermitRootLogin/g' /etc/sshd_config"
java -jar acp_commander.jar -t 192.168.0.10 -pw AdminPassword -c "echo 'PermitRootLogin yes' >>/etc/sshd_config"
java -jar acp_commander.jar -t 192.168.0.10 -pw AdminPassword -c "sed -i 's/root/rooot/g' /etc/ftpusers"

or

java -jar acp_commander.jar -t 192.168.0.10 -pw AdminPassword -s

then execute the same commands in the shell:

(echo newrootpass;echo newrootpass)|passwd
sed -i 's/PermitRootLogin/#PermitRootLogin/g' /etc/sshd_config
echo "PermitRootLogin yes" >>/etc/sshd_config
sed -i 's/root/rooot/g' /etc/ftpusers

I got the error message “pam_listfile(sshd:auth): Refused user root for service sshd” in /var/log/messages during my first login attempt.
The last command above is there because root login was denied using this file (/etc/ftpusers) as a list of users to deny access in /etc/pam.d/login (or /etc/pam.d/sshd).
I found the hint to check the pam.d configuration here (after checking the logs for the reason for the login error):
https://docs.jdcloud.com/en/virtual-machines/ssh-login-error-service-sshd
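
To confirm which pam configuration actually references /etc/ftpusers on your box, something like:

grep -r ftpusers /etc/pam.d/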

Since I wrote this note, I went over to installing Debian on my LS-QVL, so I can no longer verify each of the steps taken to gain root access.

Inner secrets of Synology Hybrid RAID (SHR) – Part 2

Changing the first disk and my case to Synology support

Now it was time to replace the first disk. As I assumed this could never go wrong (!) and did not plan to document the upgrade, I did not save any information about the partitions, mdraids and volumes during this first disk swap.

The instructions from Synology are quite good for this (until something breaks down):
Replace Drives to Expand Storage Capacity

Basically it says: replace the disks one by one, start with the smallest and wait until completion before replacing the next.

For the first disk swap, I actually shut down my DS1517 before replacing the disk (many models, including the DS1517, support hot-swapping disks). When the disk was replaced and I powered up the DS1517, I got the expected “RAID degraded” beep.
I checked that the new drive was recognized, and then started the repair of the storage pool. As this usually takes many hours, and it was started in the evening, I have no idea of the actual time spent repairing (rebuilding) the pool. It was about 90% finished when I stopped looking at the status around midnight.

The next day, I saw that it had “restarted” (a lower percentage than the day before), but this is actually the next step, initiated directly after the pool repair. It is called “reshaping”, and during that process the other mdraids are changed and adjusted (where possible) to make use of the new disk.

Changes during the first disk swap

These are only assumptions, because I did not gather enough info between swapping the disk and about a third of the way into the reshaping.

At the point of changing the first disk (refer to the previous part of my article), my storage pool/volume consisted of two mdraids joined together:
md2: RAID 5 of sda5, sdb5, sdc5, sdd5, sde5: total size about 11.7TB
md3: RAID 1 of sdd6, sde6: total size of about 4.8TB

When I pulled the first drive (3TB) and replaced it with a 14TB drive, I assume the partition table on that disk was created like this (the status below was pulled from the middle of the reshaping after the first disk swap, so I’m pretty sure it is correct):

/dev/sda1                  2048         4982527         4980480  fd
/dev/sda2               4982528         9176831         4194304  fd
/dev/sda5               9453280      5860326239      5850872960  fd
/dev/sda6            5860342336     15627846239      9767503904  fd

sda5 was matched up with the size of the old sda5 (and the ‘5’ partitions on the other disks).
sda6 was also created, either in the step before the rebuild or right before the reshaping (this partition matches the size of the ‘6’ partitions on sdd and sde).
Because the (14TB) disk is larger than the previously largest (8TB) one, there is some non-partitioned, wasted space: roughly the 11.7 billion sectors beyond the end of sda6, or about 5.8TB, which will come into use after the next disk swap.

Reshaping

Again, I have not kept any full status dumps that could confirm my information, but this is what I see afterwards, with guesses added based on the better logging of the later disk swaps.

After the storage pool was repaired, the reshaping started automatically. During this step, the RAID 1 consisting of sdd6 and sde6 (md3) was changed into a RAID 5 consisting of sda6, sdd6 and sde6.

At about 30% into the reshaping phase, my NAS became unresponsive (both shell and GUI disconnected), and I had to wait all day until I came home, did a hard reset on it and hoped everything would go well…

In the meantime, I logged a case with Synology support (see “Part 2b” of this article). They were not of any direct help, but the hard reset did take the NAS back to continuing the reshaping process.

Inner secrets of Synology Hybrid RAID (SHR) – Part 1

Inner workings of Synology Hybrid RAID

Maybe a too promising title for this post, but this is my guesswork on how SHR works when replacing drives. If anyone has a spare DS1517 (or a later device with at least 4 slots) to donate, I will investigate this further; I cannot afford to do it on my primary NAS because of the risk of losing data (and now it is not even possible without once again upgrading the disks to larger ones).

I will also post here my case (more or less in full) as sent to Synology when the NAS became unresponsive (crashed) during the rebuild/reshaping process.

What is Synology Hybrid RAID (SHR)?

This is in fact the only thing Synology themselves have briefly explained in their documentation:
What is Synology Hybrid RAID (SHR)

My short explanation is that it is a software RAID that is able to maximize the utilization of mixed-size hard drives. For simplicity, Synology illustrates this with drives varying from 500GB to 2TB (in 500GB increments), possibly fooling some people into thinking that the disks are always split into 500GB partitions.

My finding while expanding my DS1517 (from 3TB, 3TB, 3TB, 8TB, 8TB to all 14TB) is that the remaining space on the drives is split into as few parts as possible to obtain the maximum available space (after setting aside about 2.5GB for DSM (the operating system) and 2GB for swap).

Replacing disks and rebuilding the RAID

Before I replaced the first disk, I actually forgot to view and save the info about the partitions, mdraid volumes and logical volumes (I might have it somewhere else, but I will not look for it now). Based on how it looked after the first disk had been replaced and the rebuild was done (while in the process of reshaping), it should have been something like this:

# sfdisk -l
/dev/sda1                  2048         4982527         4980480  83
/dev/sda2               4982528         9176831         4194304  82
/dev/sda5               9453280      5860326239      5850872960  fd

/dev/sdb1                  2048         4982527         4980480  83
/dev/sdb2               4982528         9176831         4194304  82
/dev/sdb5               9453280      5860326239      5850872960  fd

/dev/sdc1                  2048         4982527         4980480  83
/dev/sdc2               4982528         9176831         4194304  82
/dev/sdc5               9453280      5860326239      5850872960  fd

/dev/sdd1                  2048         4982527         4980480  fd
/dev/sdd2               4982528         9176831         4194304  fd
/dev/sdd5               9453280      5860326239      5850872960  fd
/dev/sdd6            5860342336     15627846239      9767503904  fd

/dev/sde1                  2048         4982527         4980480  fd
/dev/sde2               4982528         9176831         4194304  fd
/dev/sde5               9453280      5860326239      5850872960  fd
/dev/sde6            5860342336     15627846239      9767503904  fd

Note: The partition types for sd[a-c][1-2] seem incorrect, as these were changed to “fd” later in the process; or it might be something changed by Synology in later DSM versions (but not at the point of updating DSM).

Partitions 1-2 are the system and swap partitions on all the drives, sized 2.5GB and 2GB respectively.
Partition 5 is part of the storage space available in the volume on the NAS. In this case it is about 2.9TB in size (the maximum available on the smallest disks).
Partition 6 is the second part of the total storage space. At this time those partitions are about 4.8TB in size.

mdraid volumes

Out of the partitions above, the Synology creates these mdraid volumes:
md0: RAID 1 of sda1, sdb1, sdc1, sdd1, sde1: total size 2.5GB used for DSM
md1: RAID 1 of sda2, sdb2, sdc2, sdd2, sde2: total size 2GB used for swap
md2: RAID 5 of sda5, sdb5, sdc5, sdd5, sde5: total size about 11.7TB
md3: RAID 1 of sdd6, sde6: total size of about 4.8TB

LVM logical disk

md2 and md3 are joined together into a logical disk using LVM, which gives about 16.5TB of space in total for the storage volume on the NAS (Synology DSM says 15.5TB, but the difference is only because of how I estimate the space versus how Synology does it; I just take the block count, divide it by two, and use one decimal of precision, which is adequate for this description).
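
The layout can be inspected from a root shell on the NAS; /proc/mdstat is always there, and the LVM tools should be available too (on some DSM versions you may have to call them through the ‘lvm’ binary, e.g. ‘lvm pvs’):

cat /proc/mdstat
pvs
lvs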

DSM Storage Manager before replacing the first disk

… to be continued in part 2 …

Synology NAS – PHP

Enabling extensions in PHP CLI

https://blackswan.ch/archives/594

Become root
Find out which configuration file is used:
php --ini

I get:
Configuration File (php.ini) Path: /usr/local/etc/php70
Loaded Configuration File: /usr/local/etc/php70/php.ini

Edit the php.ini file and check/change the extension dir to where it is actually located. If PHP was installed through DSM, it should be something like ‘/volume1/@appstore/PHP7.0’:
extension_dir = "/volume1/@appstore/PHP7.0/usr/local/lib/php70/modules/"

Enable the extensions you wish to use:
extension = mysqli.so
extension = phar.so
extension = openssl.so
extension = zip.so
extension = curl.so
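
Verify that the extensions actually load by listing the loaded modules:

php70 -m | grep -i -e mysqli -e phar -e openssl -e zip -e curl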

Installing Composer

Composer documentation
https://medium.com/unhandled-code/installing-composer-on-synology-6-1-4-eebd1a1c4891

Become root
Enable the extensions as above (phar, openssl and zip are needed for Composer).

Get and install Composer:
cd /usr/local/bin
curl -s http://getcomposer.org/installer | php70

Create shortcut script ‘composer’ in /usr/local/bin:
#!/bin/bash
php70 /usr/local/bin/composer.phar "$@"

Set permissions:
chmod --reference=composer.phar composer

Test:
composer --version

Synology NAS – Add disk and include in md0+md1

Adding new disks, including in mirroring of system partitions (md0 and md1)

GNU parted documentation

  1. Add the new disks as hot spares, then remove them (this creates the disklabel; otherwise just create it using parted)
  2. Check the partition table of a disk already used for system and swap. Find it by checking mdstat (cat /proc/mdstat)
    # cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md1 : active raid1 sdd2[3] sdb2[1] sdc2[2] sda2[0]
    2097088 blocks [5/4] [UUUU_]
    
    md0 : active raid1 sdd1[3] sdb1[1] sda1[0] sdc1[2]
    2490176 blocks [5/4] [UUUU_]
    
    unused devices:
    
    # parted /dev/sda
    (parted) unit s
    (parted) p
    Model: ATA ST3000DM001-1CH1 (scsi)
    Disk /dev/sda: 5860533168s
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Disk Flags:
    
    Number Start End Size File system Name Flags
    1 2048s 4982527s 4980480s ext4 raid
    2 4982528s 9176831s 4194304s linux-swap(v1) raid
    
    (parted) q
    
  3. Run parted on the new disk
    # parted /dev/sde
    (parted) unit s
    (parted) p
    Model: ATA ST3000DM001-1CH1 (scsi)
    Disk /dev/sde: 5860533168s
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Disk Flags:
    
    Number Start End Size File system Name Flags
    
    (parted) mkpart system ext4 2048 4982527
    (parted) mkpart swap linux-swap 4982528 9176831
    (parted) p
    ...
    Number Start End Size File system Name Flags
    1 2048s 4982527s 4980480s ext4 system
    2 4982528s 9176831s 4194304s linux-swap(v1) swap
    (parted) q
    
  4. Add partitions to system and swap
    # mdadm --add /dev/md0 /dev/sde1
    # mdadm --add /dev/md1 /dev/sde2
    
  5. Check rebuild status using ‘cat /proc/mdstat’
  6. Final result should be something like:
    
    # cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md1 : active raid1 sde2[4] sdd2[3] sdb2[1] sdc2[2] sda2[0]
    2097088 blocks [5/5] [UUUUU]
    
    md0 : active raid1 sde1[4] sdd1[3] sdb1[1] sda1[0] sdc1[2]
    2490176 blocks [5/5] [UUUUU]
    
    unused devices: