The pool configuration file controls the ‘server’ (proxy) side of PHP-FPM. The ‘listen’ line in this file tells where proxy requests should be accepted.
‘user’ and ‘group’ tell which user account the process runs under. ‘listen.owner’, ‘listen.group’ and ‘listen.mode’ can be set to limit access to the proxy (by other users/sites).
Sample PHP FPM pool configuration
For each version of PHP that should be available, create a pool configuration file (in ‘/etc/php/<version>/fpm/pool.d/’) like:
(change the “listen =” line so it matches the PHP version you wish to use, then point the virtualhost configuration, or the override segment in .htaccess, at the same socket)
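A minimal sketch of such a pool file (the file name, pool name, socket path and user are placeholders – adjust them and the PHP version to your setup):
; /etc/php/8.2/fpm/pool.d/example.conf
[example]
user = www-data
group = www-data
listen = /run/php/php8.2-fpm-example.sock
listen.owner = www-data
listen.group = www-data
listen.mode = 0660
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
; expose the status page used by the virtualhost sample below
pm.status_path = /fpm-status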
In the virtualhost configuration, that same listening socket must be used. The owner of the httpd process must have the rights to talk to the proxy; depending on how the pool configuration was set up, this can be a per-user or per-group setting.
Sample virtualhost, allows override of PHP version and access to /fpm-status
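A sketch, assuming the pool socket from the sample above and a site under /var/www/example (all names are placeholders):
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example

    # Allow .htaccess to override the PHP handler (see the .htaccess section)
    <Directory /var/www/example>
        AllowOverride FileInfo
        Require all granted
    </Directory>

    # Route PHP files to the FPM pool
    <FilesMatch "\.php$">
        SetHandler "proxy:unix:/run/php/php8.2-fpm-example.sock|fcgi://localhost"
    </FilesMatch>

    # Pass /fpm-status to the pool (pm.status_path in the pool file)
    <Location /fpm-status>
        SetHandler "proxy:unix:/run/php/php8.2-fpm-example.sock|fcgi://localhost"
        Require local
    </Location>
</VirtualHost>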
When running both Apache httpd and PHP as a specific user, the files on the web site only need to be readable (and writable where needed) by that user, as long as they are owned by the user running the processes.
To make a permanent change to the umask used by PHP, add ‘UMask=0077’ to the ‘Service’ section of each PHP FPM service:
systemctl edit php8.2-fpm.service
Add:
[Service]
UMask=0077
Then reload the systemd daemon and restart the FPM service:
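systemctl daemon-reload
systemctl restart php8.2-fpm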
Yesterday I noticed that the LEDs were blinking amber on one of my LS220D boxes. My initial thought was that a disk had failed (it’s just a backup of my backup). I checked with the “NAS Navigator” application, and it stated that it was unable to mount the data array (md10). (I did not log the full error message, as I continued attempting to solve the situation.)
dmesg output
I logged in as root (see other posts) to check what had gone wrong.
‘dmesg’ revealed that a disk had been lost during smartctl (multiple repeats of the below content):
As I was able to mount the partition, I did a file system check after unmounting it:
[root@BUFFALO-4 ~]# xfs_repair /dev/md10
Phase 1 - find and verify superblock...
Not enough RAM available for repair to enable prefetching.
This will be _slow_.
You need at least 1227MB RAM to run with prefetching enabled.
Phase 2 - using internal log
- zero log...
- scan filesystem freespace and inode maps...
- found root inode chunk
Phase 3 - for each AG...
- scan and clear agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
...
- agno = 30
- agno = 31
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
...
- agno = 30
- agno = 31
Phase 5 - rebuild AG headers and trees...
- reset superblock...
Phase 6 - check inode connectivity...
- resetting contents of realtime bitmap and summary inodes
- traversing filesystem ...
doubling cache size to 1024
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done
[root@BUFFALO-4 ~]# mount /dev/md10 /mnt/array1
[root@BUFFALO-4 ~]# ls /mnt/array1/
backup/ buffalo_fix.sh* share/ spool/
Another reboot, then a check revealed that md10 was still not mounted.
The error in NAS Navigator is: “E14:RAID array 1 could not be mounted. (2022/07/14 12:36:18)”
Time to check ‘dmesg’ again:
md/raid1:md2: active with 1 out of 2 mirrors
md2: detected capacity change from 0 to 1023410176
md: md1 stopped.
md: bind
md/raid1:md1: active with 1 out of 2 mirrors
md1: detected capacity change from 0 to 5114888192
md: md0 stopped.
md: bind
md/raid1:md0: active with 1 out of 2 mirrors
md0: detected capacity change from 0 to 1023868928
md0: unknown partition table
kjournald starting. Commit interval 5 seconds
EXT3-fs (md0): using internal journal
EXT3-fs (md0): mounted filesystem with writeback data mode
md1: unknown partition table
kjournald starting. Commit interval 5 seconds
EXT3-fs (md1): using internal journal
EXT3-fs (md1): mounted filesystem with writeback data mode
kjournald starting. Commit interval 5 seconds
EXT3-fs (md1): using internal journal
EXT3-fs (md1): mounted filesystem with writeback data mode
md2: unknown partition table
Adding 999420k swap on /dev/md2. Priority:-1 extents:1 across:999420k
kjournald starting. Commit interval 5 seconds
EXT3-fs (md0): using internal journal
EXT3-fs (md0): mounted filesystem with writeback data mode
The above shows that md0, md1 and md2 came up, but each is missing its mirror partition (the one from /dev/sda, which disappeared).
Further down in dmesg output
md: md10 stopped.
md: bind
md: bind
md/raid0:md10: md_size is 15565748224 sectors.
md: RAID0 configuration for md10 - 1 zone
md: zone0=[sda6/sdb6]
zone-offset= 0KB, device-offset= 0KB, size=7782874112KB
md10: detected capacity change from 0 to 7969663090688
md10: unknown partition table
XFS (md10): Mounting Filesystem
XFS (md10): Ending clean mount
XFS (md10): Quotacheck needed: Please wait.
XFS (md10): Quotacheck: Done.
udevd[3963]: starting version 174
md: cannot remove active disk sda6 from md10 ...
[root@BUFFALO-4 ~]# mount /dev/md10 /mnt/array1/
[root@BUFFALO-4 ~]# ls -l /mnt/array1/
total 4
drwxrwxrwx 3 root root 21 Dec 14 2019 backup/
-rwx------ 1 root root 571 Oct 14 2018 buffalo_fix.sh*
drwxrwxrwx 3 root root 91 Sep 16 2019 share/
drwxr-xr-x 2 root root 6 Oct 21 2016 spool/
What the h… “cannot remove active disk sda6 from md10”
Checking md raid status
[root@BUFFALO-4 ~]# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md10 : active raid0 sda6[0] sdb6[1]
7782874112 blocks super 1.2 512k chunks
md0 : active raid1 sdb1[1]
999872 blocks [2/1] [_U]
md1 : active raid1 sdb2[1]
4995008 blocks super 1.2 [2/1] [_U]
md2 : active raid1 sdb5[1]
999424 blocks super 1.2 [2/1] [_U]
unused devices: <none>
[root@BUFFALO-4 ~]# mdadm --detail /dev/md10
/dev/md10:
Version : 1.2
Creation Time : Fri Oct 21 15:58:46 2016
Raid Level : raid0
Array Size : 7782874112 (7422.33 GiB 7969.66 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Fri Oct 21 15:58:46 2016
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Chunk Size : 512K
Name : LS220D896:10
UUID : 5ed0c596:60b32df6:9ac4cd3a:59c3ddbc
Events : 0
Number Major Minor RaidDevice State
0 8 6 0 active sync /dev/sda6
1 8 22 1 active sync /dev/sdb6
So here, md10 is fully working, while md0, md1 and md2 are missing their second device. Simple to correct – just add the missing partitions back:
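Based on the mdstat output above, the missing halves are the sda partitions, so re-adding them should look something like this:
mdadm /dev/md0 --add /dev/sda1
mdadm /dev/md1 --add /dev/sda2
mdadm /dev/md2 --add /dev/sda5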
Some time later the sync was finished, and I rebooted again. Finally, after this reboot, /dev/md10 was automatically mounted to /mnt/array1 again.
Problem solved 🙂
smartctl notes
The values of attributes 5 (Reallocated_Sector_Ct), 197 (Current_Pending_Sector) and 198 (Offline_Uncorrectable) should be zero for a healthy drive, so one disk in the NAS is actually failing, but the cause of the hiccup (the disconnect) was a core dump by the weekly smartctl scan.
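To check these attributes yourself, something like this works (assuming smartctl is available on the box):
smartctl -A /dev/sda | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'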
At about 30% into the reshaping phase (after the first disk swap), my NAS went unresponsive (it disconnected both shell and GUI), and I had to wait all day until I got home, did a hard reset on it and hoped everything had gone well.
In the meantime, I logged a case with Synology support. They were not of any direct help, but the hard reset did bring the NAS back to continuing the reshaping process.
My case with Synology support
==
2020-12-01 13:51:37
==
Replaced one of the smallest drives in my NAS yesterday (SHR) as a first step for later expansion (I will replace all drives with larger ones before expanding – if possible to delay any automatic expansion until then).
About 80% finished with rebuilding yesterday, but for some reason it started over after the first round.
Today about 30% finished when I lost the connection to the NAS (over ssh and the web interface). It does not auto-reboot and does not respond to ping.
To lessen the risk of data loss, what should my first step be ? Can I just pull the plug and hard-reboot the NAS with the current disks mounted (14TB, 3TB, 3TB, 8TB, 8TB in a SHR config), or is it better to replace or remove the disk that I recently replaced (in slot 1: 14TB in place of the previous still untouched 3TB) ?
What are the steps to getting the volume back online if it does not mount automatically ?
As the NAS is down, I am not able to upload any logs, but attached is the rebuild status before the crash.
==
2020-12-01 15:28:58
Synology’s response (besides the auto-response “send us logs”)
Not useful at all – it restates exactly what I had already done; “Mark”, who replied, did not read anything.
==
Hello,
Thank you for contacting Synology.
If you wish to replace a drive in your unit, please perform these steps one by one allowing for the repair to complete before replacing any further drives.
1. Pull out the drive in question.
2. Insert a replacement drive.
3. Proceed to the Storage Manager > Storage Pool > select the volume in question and click “Manage/Action”
4. Run through the wizard to repair the volume in question with the replacement drive.
5. Once complete, proceed to the Storage Manager > Volume and Configure/Edit the volume to configure the volume to have additional size.
Please see the link below for more help.
https://www.synology.com/en-uk/knowledgebase/DSM/help/DSM/StorageManager/storage_pool_expand_replace_disk
Please bear in mind that to benefit from the additional space, you will need to replace at least 2 drives with larger ones in RAID 5/SHR, or 3 drives in RAID 6/SHR2.
You can see the type of RAID used via – DSM > Storage Manager > Storage Pool.
If you have any further questions please do not hesitate to get in touch.
Best Regards,
Mark
==
2020-12-01 16:02:14
My reply
==
Ok, so I restart the problem description then:
I did (yesterday):
0. Power down Synology
1. Pull out the drive in question.
2. Insert a replacement drive.
3. Proceed to the Storage Manager > Storage Pool > select the volume in question and click “Manage/Action”
4. Run through the wizard to repair the volume in question with the replacement drive.
THEN, today:
4b. Today about 30% finished when I lost the connection to the NAS (over ssh and the web interface). It does not auto-reboot and does not respond to ping.
SO what now ?
As the NAS is unresponsive I will never reach step 5:
To lessen the risk of data loss, what should my first step be ? Can I just pull the plug and hard-reboot the NAS with the current disks mounted (14TB, 3TB, 3TB, 8TB, 8TB in a SHR config), or is it better to replace or remove the disk that I recently replaced (in slot 1: 14TB in place of the previous still untouched 3TB) ?
What are the steps to getting the volume back online if it does not mount automatically ?
Also, is there an option to DELAY the expansion until all drives have been replaced? As you replied, changing the first drive will not expand the volume, but I’m not there yet since I’m stuck in a crash (unresponsive system).
==
2020-12-02 23:25:46
My reply to Synology’s suggestion to collect logs using the support centre
==
How do I launch “Support Center” on the device when it is unresponsive (which was my initial question – what to do when it hangs in the middle of repairing/reshaping) ?
I forced it off and restarted and hoped for the best – reshaping continued and the second disk is now in reshaping mode.
My other question has not yet been answered:
Is it possible to delay the time consuming step of reshaping until all disks have been replaced ?
Initial configuration: 3TB 3TB 3TB 8TB 8TB
After replacement of the first disk: 14TB 3TB 3TB 8TB 8TB; after reshaping, the first disk got a partition matching the 8TB disks.
After replacement of the second disk: 14TB 14TB 3TB 8TB 8TB; while reshaping again, disks 1 and 2 now look similar, with one partition matching the largest of the remaining 3TB disks, one matching the largest on the 8TB disks, and the remainder (roughly 6TB) the same on both 14TB disks.
When replacing the third 3TB disk, I assume the following would happen:
(14TB 14TB 14TB 8TB 8TB)
On the first and second disk, the (about) 3TB partition will be replaced with a partition matching the 8TB disks. Then the remainder (3 disks with about 6TB unallocated space each) will be used for another RAID 5 (after yet another reshape).
So my question again: is it possible to delay reshaping until I have had all the disks replaced? I understand that the “rebuild” is needed in between every replacement, but the “reshape” should be needed only once.
==
Synology’s reply
==
I’m afraid you cannot delay or prevent this process; once it starts, it needs to run to completion.
I would suggest leaving this running for now; if the volume does crash fully in the meantime, I can take a look at what we can do to recover the volume, but there is not much I can do currently, I’m afraid.
If you have any further questions, please do not hesitate to get in touch.
This quick guide explains the steps needed to run different versions of PHP on the same server (different virtual hosts, or even different folders within the same site).
Prepare
If the ‘add-apt-repository’ command is missing, you need to install the package “software-properties-common” first:
apt install -y software-properties-common
Add the ondrej/php repository to your system:
add-apt-repository ppa:ondrej/php
Update and upgrade the packages:
apt update
apt upgrade
apt autoremove
Note: after these steps you will have both PHP 8.1 and 8.2 installed, and most likely PHP 8.1 active for Apache. If you follow the hint about removing packages no longer needed (the PHP 8.1 packages), PHP under Apache will also stop working. This is remedied in the section below, or by switching Apache over to PHP 8.2 instead:
a2dismod php8.1
a2enmod php8.2
Install the Apache FastCGI module:
apt install -y libapache2-mod-fcgid
For each version of PHP
As of 1 Sep 2022, PHP versions 8.0 (php8.0) and 8.1 (php8.1) are those with active support and updates. PHP version 7.4 (php7.4) will continue to receive critical updates until 28 Nov 2022. See Supported PHP Versions.
For each version, any custom configuration in php.ini also has to be duplicated. In Ubuntu, the php.ini files are located in the subfolders of /etc/php/x.x/ (one subfolder for each run environment: “apache2”, “cgi”, “cli”, “fpm”).
With the above in mind, here are the commands needed to install the required and extra packages for PHP 7.4, 8.0, 8.1 and 8.2:
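A sketch with a typical extension set (the exact list of extra packages is my assumption – trim or extend it to match what your sites need):
apt install -y php7.4 php7.4-fpm php7.4-cli php7.4-mysql php7.4-curl php7.4-gd php7.4-mbstring php7.4-xml php7.4-zip
apt install -y php8.0 php8.0-fpm php8.0-cli php8.0-mysql php8.0-curl php8.0-gd php8.0-mbstring php8.0-xml php8.0-zip
apt install -y php8.1 php8.1-fpm php8.1-cli php8.1-mysql php8.1-curl php8.1-gd php8.1-mbstring php8.1-xml php8.1-zip
apt install -y php8.2 php8.2-fpm php8.2-cli php8.2-mysql php8.2-curl php8.2-gd php8.2-mbstring php8.2-xml php8.2-zip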
Configuring PHP version for virtual host or subfolder
Activate necessary Apache modules and restart Apache:
a2enmod actions fcgid alias proxy_fcgi
systemctl restart apache2
Use the FilesMatch directive to set the PHP version:
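For example, to send all .php files to the PHP 8.2 FPM socket (the socket path must match the ‘listen’ line in the corresponding pool configuration):
<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php/php8.2-fpm.sock|fcgi://localhost"
</FilesMatch>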
FilesMatch is valid in both the virtualhost configuration and inside a Directory section.
To set PHP version globally for a virtual host, use it outside a Directory section.
The default PHP version can be set using ‘a2enmod php8.2’ (or any other version)
Check the configuration for errors:
apachectl configtest
If the result is “Syntax OK”, restart Apache:
systemctl restart apache2
Overriding PHP version using .htaccess
For this to work, “AllowOverride FileInfo” must be present for the directory (or above) in which the .htaccess file will be used to set the PHP version.
For the default virtual host, the DocumentRoot is set to /var/www/html, so to allow PHP version to be set by .htaccess at that level or below, the following must be present in the vhost configuration:
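<Directory /var/www/html>
    AllowOverride FileInfo
</Directory>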
When this has been set, FilesMatch and SetHandler (as described above) can be used within the .htaccess file. The .htaccess method has higher priority than what is set for the virtualhost, or for the subfolder within the DocumentRoot of the virtual host.
Testing
Create a file named ‘i.php’ in the locations with the different PHP versions (these can be different virtualhosts or folders):
<?php
echo phpversion();
?>
Access these locations on the virtualhosts or their directory locations to verify that they are using different PHP versions.
This is probably not the only way to do this; it is just something I had to dig up to be able to control the piStorm from the console keyboard without being logged in.
After trying to find some built-in way of doing it, I ended up using ‘lirc’ (‘inputlircd’) to fetch the keystrokes and execute appropriate commands in the background. The guide is not intended to be complete, and it has not even been re-tested, because getting this working the first time was a trial-and-error process and I did not take any notes.
The marked lines in the partial content of “/etc/init.d/inputlirc” reveal that a file “/etc/default/inputlirc” is sourced.
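From memory, the relevant excerpt looks something like this (the exact script may differ between versions):
# /etc/init.d/inputlirc (excerpt)
[ -r /etc/default/inputlirc ] && . /etc/default/inputlirc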
Change startup parameters for inputlircd
“/etc/default/inputlirc” contains parameters for running inputlircd, including the input device to capture events from and the parameters to the service looking for keystrokes.
Read the inputlircd manpage (man 8 inputlircd) to find out which parameters you need/want to use. The below is what I had to put in the file:
# Options to be passed to inputlirc.
EVENTS="/dev/input/event0"
OPTIONS="-m 0 -c"
-m sets the lowest keycode to pass to the daemon; ‘-m 0’ passes every key, including the ordinary keyboard keys.
I also use -c to capture the modifier keys (CTRL, SHIFT, ALT) so they become part of a keystroke instead of generating their own events. This makes it possible to use combinations like SHIFT+F1 for command execution.
After editing and saving the file, enable and (re)start the inputlirc service:
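systemctl enable inputlirc
systemctl restart inputlirc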
Snooping for keypress events
Unless you know all the keycodes you are going to use for running commands, now is a good time to check what lircd receives on specific keypresses. Run the command to snoop for keypresses in the shell, and press keys on the keyboard connected to the computer (this could be connected through USB, PS/2, Bluetooth, IR, whatever)
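The standard lirc client ‘irw’ can be used for this; it connects to the lircd socket and prints one line per event (pass the socket path as an argument if your distribution uses a non-default location):
irw
Each keypress prints the code, repeat count and button name – the button names (such as SHIFT_KEY_F1) are what go into the irexec configuration below.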
The irexec service
To make the irexec service restart when inputlirc is restarted (due to a key configuration change), the service startup file has to be slightly modified:
/lib/systemd/system/irexec.service:
[Unit]
Documentation=man:irexec(1)
Documentation=http://lirc.org/html/configure.html
Documentation=http://lirc.org/html/configure.html#lircrc_format
Description=Handle events from IR remotes decoded by lircd(8)
After=inputlirc.service
Requires=inputlirc.service
...
Add the lines marked above, then reload the systemd configuration and enable and start the irexec service:
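systemctl daemon-reload
systemctl enable irexec
systemctl start irexec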
Configuring what to run on keypresses
The file “/etc/lirc/irexec.lircrc” contains the configuration for what commands to run when selected key(combinations) are used. Wipe out all the defaults in there and add something useful. Below is the updated, more generic configuration I use on my PiOS for the piStorm now, just mapping some keys to a script with a similar name:
begin
prog = irexec
button = SHIFT_KEY_F1
config = /home/pi/irexec/shift_f1.sh
end
begin
prog = irexec
button = SHIFT_KEY_F2
config = /home/pi/irexec/shift_f2.sh
end
begin
prog = irexec
button = SHIFT_KEY_F3
config = /home/pi/irexec/shift_f3.sh
end
begin
prog = irexec
button = SHIFT_KEY_F4
config = /home/pi/irexec/shift_f4.sh
end
begin
prog = irexec
button = SHIFT_KEY_F5
config = /home/pi/irexec/shift_f5.sh
end
begin
prog = irexec
button = SHIFT_KEY_F6
config = /home/pi/irexec/shift_f6.sh
end
begin
prog = irexec
button = SHIFT_KEY_F7
config = /home/pi/irexec/shift_f7.sh
end
begin
prog = irexec
button = SHIFT_KEY_F8
config = /home/pi/irexec/shift_f8.sh
end
begin
prog = irexec
button = SHIFT_KEY_F9
config = /home/pi/irexec/shift_f9.sh
end
begin
prog = irexec
button = SHIFT_KEY_F10
config = /home/pi/irexec/shift_f10.sh
end
begin
prog = irexec
button = SHIFT_KEY_F11
config = /home/pi/irexec/shift_f11.sh
end
begin
prog = irexec
button = SHIFT_KEY_F12
config = /home/pi/irexec/shift_f12.sh
end
begin
prog = irexec
button = CTRL_SHIFT_KEY_F12
config = /home/pi/irexec/ctrl_shift_f12.sh
end
Whenever you have made a change to /etc/lirc/irexec.lircrc, you need to restart inputlirc (which automatically restarts irexec):
systemctl restart inputlirc
Action scripts in /home/pi/irexec
These scripts can be updated without having to restart inputlirc. Be sure to set the execute flag on them (chmod 755 /home/pi/irexec/*.sh)
For the piOS installation for the piStorm, the contents of my configuration-switching scripts are as follows:
/home/pi/irexec/shift_f1.sh (the F-keys 1-10 with the SHIFT key held down):
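I did not keep the exact contents, but conceptually each script swaps in a different emulator configuration and restarts the service; a hypothetical sketch (the config directory and file names are placeholders):
#!/bin/sh
# Hypothetical sketch: activate configuration 1 and restart the emulator.
# Adjust the source config path to your own layout.
sudo cp /home/pi/configs/config1.cfg /home/pi/default.cfg
sudo systemctl restart pistorm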
In a similar way, I have set up the other shift-f-key combinations as shown in the video.
I have used SHIFT+F12 for a safe reboot, and CTRL+SHIFT+F12 for a shutdown of the Pi. If running piStorm in RTG mode, there can be a delay of about one minute before anything happens.
/home/pi/irexec/shift_f12.sh:
#!/bin/sh
sudo systemctl stop pistorm
sudo reboot
/home/pi/irexec/ctrl_shift_f12.sh:
#!/bin/sh
sudo systemctl stop pistorm
sudo halt -p
You can check the status of the piStorm service to see that it received the shutdown command:
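systemctl status pistorm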
Revised start-emulator.sh script
Because I sometimes want to run the wip-crap version of the emulator, I have added a check for the mention of “wip-crap” in the configuration file that is going to be used; depending on whether it is present or not, the emulator is launched from the correct directory:
#!/bin/sh
if grep -q wip-crap "/home/pi/default.cfg"; then
echo "wip-crap"
cd /home/pi/pistorm-bnu/
else
echo "main"
cd /home/pi/pistorm/
fi
sudo ./emulator --config /home/pi/default.cfg
exit 0
To enable the wip-crap version, just add a comment in the beginning of the configuration such as:
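# wip-crap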
Changing the first disk and my case to Synology support
Now it was time to replace the first disk. As I assumed this would never go wrong (!) and did not plan to document the upgrade, I did not record any information about the partitions, mdraids and volumes during this first disk swap.
Basically, Synology’s instructions say: replace the disks one by one, start with the smallest, and wait until completion before replacing the next.
For the first disk swap, I actually shut down my DS1517 before replacing the disk (many models, including the DS1517, support hot-swapping the disks). When the disk was replaced and I powered up the DS1517, I got the expected “RAID degraded” beep.
I checked that the new drive was recognized, and then started the repair of the storage pool. As this usually takes many hours, and it was done in the evening, I have no idea of the actual time spent repairing (rebuilding) the pool. It was about 90% finished when I stopped looking at the status around midnight.
The next day, I saw that it had “restarted” (a lower percentage than the day before), but this is actually the next step, initiated directly after the pool repair. It is called “reshaping”, and during that process the other mdraids are changed and adjusted (if possible) to use the new disk.
Changes during the first disk swap
These are only assumptions, because I did not collect enough information between swapping the disk and about a third of the way into reshaping.
At the point of changing the first disk (refer to the previous part of my article), my storage pool/volume consisted of two mdraids joined together:
md2: RAID 5 of sda5, sdb5, sdc5, sdd5, sde5: total size about 11.7TB
md3: RAID 1 of sdd6, sde6: total size of about 4.8TB
When I pulled the first drive (3TB) and replaced it with a 14TB drive, I assume the partition table on that disk was created like this (status pulled from the middle of reshaping after the first disk swap, so I’m pretty sure this is correct):
sda5 was matched up with the size of the old sda5 (and the ‘5’ partitions on the other disks)
sda6 was also created, either in the step before the rebuild or right before reshaping (this partition matches the size of the ‘6’ partitions on sdd and sde)
Because the (14TB) disk is larger than the previous largest (8TB) one, there is some unpartitioned, wasted space (about 5.8TB, which will come into use after the next disk swap).
Reshaping
Again, I have not taken any full status dumps with which my information can be confirmed, but this is what I see afterwards, with my guesses added based on the better logging of later disk swaps.
After the storage pool was repaired, reshaping started automatically. During this step, the RAID 1 consisting of sdd6 and sde6 (md3) was changed into a RAID 5 consisting of sda6, sdd6 and sde6.
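Conceptually, such a conversion can be done with mdadm roughly like this (a sketch of the mechanism, not necessarily the exact commands DSM runs):
mdadm /dev/md3 --add /dev/sda6
mdadm --grow /dev/md3 --level=5 --raid-devices=3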
At about 30% into the reshaping phase, my NAS went unresponsive (it disconnected both shell and GUI), and I had to wait all day until I got home, did a hard reset on it and hoped everything had gone well.
In the meantime, I logged a case with Synology support (see “Part 2b” of this article). They were not of any direct help, but the hard reset did bring the NAS back to continuing the reshaping process.
Maybe too promising a title for this post, but this is my guesswork on how SHR works when replacing drives. If anyone has a spare DS1517 (or a later device with at least 4 slots) to donate, I will investigate this further; I cannot afford to do it on my primary NAS because of the risk of losing data – and it is now not even possible without upgrading the disks to larger ones again.
I will also post here my case (more or less in full) sent to Synology when the NAS became unresponsive (crashed) during the rebuild/reshaping process.
My short explanation is that it is a software RAID that is able to maximize the utilization of mixed-size hard drives. For simplicity, Synology illustrates this with drives varying from 500GB to 2TB (in 500GB increments), possibly fooling some people into thinking that the disks are always split into 500GB partitions.
My finding while expanding my DS1517 (from 3TB, 3TB, 3TB, 8TB, 8TB to all 14TB) is that the remaining space on the drives is split into as few parts as possible to obtain the maximum available space (after setting aside about 2.5GB for DSM (the operating system) and 2GB for swap).
Replacing disks and rebuilding the RAID
Before I replaced the first disk, I actually forgot to view and save the information about the partitions, mdraid volumes and logical volumes (I might have that somewhere else, but I will not look for it now). Based on how it looked after the first disk had been replaced and the rebuild was done (in the process of reshaping), it should have been something like this:
Note: The partition types for sd[a-c][1-2] seem incorrect, as these were changed to “fd” later on during the process; or it might be something changed by Synology in later DSM versions (but not at the point of updating DSM).
Partitions 1-2 are the system and swap partitions on all the drives, sized 2.5GB and 2GB respectively.
Partition 5 is a part of the storage space available in the volume on the NAS. In this case it is about 2.9TB in size (the maximum available on the smallest disks).
Partition 6 is the second part of the total storage space. At this time those partitions are about 4.8TB in size.
mdraid volumes
Out of the partitions above, the Synology creates these mdraid volumes:
md0: RAID 1 of sda1, sdb1, sdc1, sdd1, sde1: total size 2.5GB used for DSM
md1: RAID 1 of sda2, sdb2, sdc2, sdd2, sde2: total size 2GB used for swap
md2: RAID 5 of sda5, sdb5, sdc5, sdd5, sde5: total size about 11.7TB
md3: RAID 1 of sdd6, sde6: total size of about 4.8TB
LVM logical disk
md2 and md3 are joined together into a logical disk using LVM, which gives about 16.5TB of space in total for the storage volume on the NAS (Synology DSM says 15.5TB, but the difference is only because of how I estimate the space versus how Synology does – I just take the block count, divide by two, and use one-decimal precision, which is adequate for this description).
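To illustrate the mechanism, joining two mdraids into one logical volume with LVM looks roughly like this (the volume group and LV names are placeholders, not necessarily what DSM uses):
pvcreate /dev/md2 /dev/md3
vgcreate vg1000 /dev/md2 /dev/md3
lvcreate -l 100%FREE -n lv vg1000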
DSM Storage Manager before replacing the first disk
Edit the php.ini file and check/change the extension dir to where it is located. If PHP was installed through DSM, it should be something like ‘/volume1/@appstore/PHP7.0’:
extension_dir = "/volume1/@appstore/PHP7.0/usr/local/lib/php70/modules/"
Enable the extensions you wish to use:
extension = mysqli.so
extension = phar.so
extension = openssl.so
extension = zip.so
extension = curl.so