LaKademy 2018 – Third and Fourth Days (October 13th and 14th)

The third day of LaKademy 2018 was my last day participating on the event.

During October 13th, we started the day with a promo meeting to discuss plans and actions for the Latin American KDE community over the next year. We made some decisions and discussed topics involving KDE's participation in other events, the promotion of our own events in Latin America (including LaKademy 2019 and Kafé com Qt), and some general details about our community.

Promo meeting.

After the promo meeting, I decided to build and take a look at the code of AtCore, a library containing the main components of Atelier, an open source 3D printing application developed by the KDE community. I noticed that most of the enums used in src/core/atcore were unscoped, which could lead to name conflicts in the future. So I decided to contribute this small change, converting them to C++11 scoped enums, which guards against potential name clashes.
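To illustrate the difference (with made-up enum names, not AtCore's actual ones): unscoped enumerators leak into the enclosing scope, while scoped enums keep them contained and force qualification.

// Unscoped enums dump their enumerators into the enclosing scope, so two enums
// can collide on a name:
enum PrinterState { Connected, Disconnected };
// enum PlotterState { Connected };   // would not compile: 'Connected' redeclared

// C++11 scoped enums (enum class) keep the enumerators inside their own scope
// and require qualification, which avoids this kind of clash:
enum class PrinterStatus { Connected, Disconnected };
enum class PlotterStatus { Connected };        // fine: PlotterStatus::Connected

PrinterStatus status = PrinterStatus::Connected;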

During the afternoon, I continued my work on KDE Partition Manager, implementing the RAID resize functionality in kpmcore, where I decided to include operations for growing and shrinking RAID devices. Then I fixed some bugs in the creation of RAID 1, 4, 5, 6 and 10 arrays: mdadm performs a check before mirroring the devices, and my previous implementation was ignoring it. This check is necessary to confirm which device size should prevail before mirroring.
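For context, mdadm's Grow mode handles both growing and shrinking through its --size option. The sketch below is only an illustration of that call driven from C++/Qt, with a placeholder device path; it is not the actual kpmcore code, which wraps external tools through its own helper classes.

#include <QProcess>
#include <QString>
#include <QStringList>

// Minimal sketch: grow a software RAID array to use all available space on
// its member devices. Equivalent to: mdadm --grow /dev/md0 --size=max
int growRaidArray(const QString &raidDevice)
{
    return QProcess::execute(QStringLiteral("mdadm"),
                             { QStringLiteral("--grow"), raidDevice,
                               QStringLiteral("--size=max") });
}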

#KDEis22

I had to travel back to my city on the morning of October 14th. On that date, KDE celebrated its 22nd anniversary. The KDE people who stayed in Florianópolis had a cake to celebrate. I couldn’t join them, but I am very happy to be part of this inspiring community. Happy birthday, KDE! I hope this great community keeps succeeding over the years, making the world a better place with its great software! #KDEis22 🙂

LaKademy 2018 – Second Day (October 12th)

Every piece of new code is followed by new bugs.

During the second day of LaKademy I focused on fixing bugs in the code I had implemented for KDE Partition Manager on the first day. In the afternoon, I started working on RAID resizing and discussed with Andrius Stikonas on the Calamares IRC channel some RAID functionality related to resizing disks, as well as bugs in both LVM and RAID support. I also talked with some KDE coders here at LaKademy about Qt and C++, and learned a lot from them.

Image 1: Concentrated while coding.

We also took some group photos. It was very nice to participate.

Image 2: LaKademy 2018 group photo.

And here is a more detailed list of what I did in terms of coding and some things that I am planning to complete during the third day:

  • Solved the bug mentioned in the previous post, where the device mapper identified an old partition table from a deleted RAID device that contained the same physical volumes as a newer one. I don’t know whether udev has a way to clear it when a logical device is removed, so I decided to erase the partition table of the device just before deleting it (see the sketch after this list).
  • Fixed some segfaults when rescanning LVM and RAID. This bug was my fault (shame!): I forgot to check whether a logical device contains a partition table when updating the partition node information.
  • Started the implementation of RAID resizing. I focused on adapting the LVM GUI to use VolumeManagerDevice references instead of LVM-specific classes directly, so that any kind of logical volume can be resized in the future. I will continue working on this resizing implementation during the third day.
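A minimal sketch of that “erase before deleting” idea follows. It is not the actual kpmcore implementation; wiping signatures with wipefs is simply one way to drop the stale metadata, and the device path is a placeholder.

#include <QProcess>
#include <QString>
#include <QStringList>

// Minimal sketch: wipe all filesystem and partition-table signatures from a
// RAID device right before it is deleted, so the device mapper cannot pick up
// stale metadata when a new array reuses the same physical volumes.
// Equivalent to: wipefs --all /dev/md0
int wipeSignatures(const QString &raidDevice)
{
    return QProcess::execute(QStringLiteral("wipefs"),
                             { QStringLiteral("--all"), raidDevice });
}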

The third day of LaKademy 2018 will be my last day here in Florianópolis. I will be traveling back to Salvador (my city) on the morning of October 14th, because I need to travel again on October 15th to a university-related event. I will keep enjoying LaKademy during this last day; it has been great! 🙂

 

LaKademy 2018 – First Day (October 11th)

LaKademy 2018 has started!

It is happening in the city of Florianópolis, Brazil. It is a nice opportunity for me to meet other KDE contributors from Latin America. We are discussing ideas for KDE in Latin America, and everybody is working on something related to the community. The event will continue until October 14th. Below you can see some photos:

Aracele and Filipe talking about the event.
Coding and sharing ideas with Dórian and Pedro.
Enjoying some Arab food.

I am enjoying this time to talk with long-time contributors and to share ideas with newer ones. I am also submitting some patches to KDE Partition Manager. Here is a brief list of what I have done for KDE Partition Manager on this first day of LaKademy:

  • Corrections for SoftwareRAID, including the process of loading physical volume instances for the device object.
  • Support for RAID activation through the GUI. Before, you could only activate RAID by rescanning the disks. Maybe I can implement this functionality for LVM as well in the future (see the sketch after this list for the underlying mdadm calls).
  • Support for removing a RAID device from the mdadm.conf file, allowing the user to remove inactive RAID devices. There is a particular bug in this process: after creating a new RAID device with the same physical volumes as a deleted device, the device mapper identifies the old partition table. I must check whether erasing each physical volume’s superblock is enough or whether I need to erase the RAID partition table before erasing the device.
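For reference, the mdadm calls behind activating and stopping an array look roughly like the sketch below. This is only an illustration with a placeholder device path, not kpmcore's actual helper code.

#include <QProcess>
#include <QString>
#include <QStringList>

// Assemble (activate) an array that mdadm already knows about, e.g. /dev/md0.
int activateRaid(const QString &mdDevice)
{
    return QProcess::execute(QStringLiteral("mdadm"),
                             { QStringLiteral("--assemble"), mdDevice });
}

// Stop (deactivate) an assembled array so that it can be removed afterwards.
int stopRaid(const QString &mdDevice)
{
    return QProcess::execute(QStringLiteral("mdadm"),
                             { QStringLiteral("--stop"), mdDevice });
}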

Well, this is it for now. The second day has already started, and tomorrow I will post more about what I have been working on. 🙂

FOSS Contributions Log – Aug/Sept 2018

I am starting a new type of post on my blog, which I will use as a monthly report/log to track my free software contributions. 🙂

This post covers not only the last month (September); I have decided to include my experiences from August as well. Last month I was very busy with university assignments after the two-week trip I took in August for Akademy and ERBASE (a conference where I presented a paper). I am at the end of the semester at university, so I am looking forward to my vacation so I can code more on the projects I contribute to.

Following this brief comment about Akademy, I will talk about what I have done in KDE during August and September. I am still working on the RAID patch for KDE Partition Manager, where I still have some problems with device mapping and udev. The RAID arrays are not being mapped as I expected. There are also some bugs related to partition creation inside a RAID array, and another one related to udev keeping the device busy, which raises errors when you try to perform any disk operation.

Despite that, I aim to fix these problems soon and make RAID support great in kpmcore. Here is a list of some important patches I have made to kpmcore in the meantime (as you can see in the raid-support branch):

  • Changed CreateVolumeGroupJob to support RAID, finally allowing RAID creation.
  • Implemented RAID deactivation.
  • Support for configuration of custom mdadm.conf paths in KDE Partition Manager.
  • mdadm.conf is being updated after RAID creation through an application helper (for compatibility with the KAuth DBus service whitelist in kpmcore).
  • Improved the layout of the Volume Group GUI in KDE Partition Manager.
  • Allowed RAID creation only with partitions of type LinuxRaidMember, Unformatted or Unknown, to avoid deleting important partitions (see the sketch after this list).
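The last item boils down to a simple whitelist check. The sketch below illustrates the idea with a made-up enum; kpmcore's real FileSystem::Type differs in detail.

// Illustrative enum only; kpmcore's real file system type enum has many more
// values and different spelling.
enum class FileSystemType { LinuxRaidMember, Unformatted, Unknown, Ext4, Ntfs };

// A partition may be offered for RAID creation only if wiping it cannot
// destroy user data.
bool canUseForRaid(FileSystemType type)
{
    return type == FileSystemType::LinuxRaidMember
        || type == FileSystemType::Unformatted
        || type == FileSystemType::Unknown;
}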

I am trying to finish RAID activation; there is just some work left in kpmcore to complete it. I have implemented the GUI action in KDE Partition Manager, but I couldn’t finish the implementation of the Operations and Jobs in the library. I hope to get some time in October to contribute more and finish these pending parts. Remember that LaKademy 2018 is near and I will be participating. And I got the opportunity to contribute to KDE as a Google Code-in 2018 mentor! YAY! 🙂

And about my contributions to other FOSS projects outside the KDE umbrella…

I have started communicating with the mlpack library community to contribute there as well. I am a machine learning enthusiast and a beginner researcher in this area, so I started studying this library and I am thinking about contributing to this feature request, which would provide a simple CLI application to access mlpack’s artificial neural network module. For now I am just trying to understand its codebase, as I have no previous experience with C++ template metaprogramming, and this library is built around that paradigm. Such a nice opportunity to learn new things and a great challenge! I hope to write more about my progress on it. 🙂

Akademy 2018 was great!

So Akademy 2018 has finished, and it was a very impressive event. It happened in Vienna, Austria, and it was my first opportunity to attend a KDE event, travel to another country and meet people from the community!

I couldn’t participate in the first day of the event (August 11th) because my flight was a little delayed and I only arrived in Vienna at night. So on the first day I only had the opportunity to join the others for a drink and try some Wiener schnitzel and other Austrian food.

Yummy!

On the second day I watched some talks and presentations and met some people from KDE, including Adriaan de Groot, who was one of my GSoC 2018 mentors. It was the first time I had talked with him in person, and he is a very nice person. We discussed my GSoC project and some future work for Calamares. It was nice to meet you, Ade!

The Brazilian RGB (Filipe, Eliakin and I).

There were some Birds of a Feather sessions (where Akademy attendees group together around a shared interest and hold discussions), workshops and meetings on the remaining days of the event. I got the opportunity to participate in some of them, and I even suggested and helped coordinate a BoF about the Google Summer of Code and Season of KDE programs. It was pretty nice to meet the GSoC/SoK admins and the other students, and to learn a little about how they went through their projects this year.

KDE Brasil.

I also hacked on kpmcore, partitionmanager and Calamares a little and submitted some patches. Now partitionmanager can create RAID! I worked on this improvement and fixed some minor issues related to the RAID array visualization. I also helped solve some bugs related to the KAuth patch in kpmcore, which were happening when changing the kpmcore backend through KDE Partition Manager.

Vienna is a very beautiful city with amazing food (very different from Brazilian food, by the way). I learned some basic German words, visited some places and learned about the city’s history. Kahlenberg has one of the most beautiful views I have ever contemplated.

Day trip to Kahlenberg.

This is it! Hope to see you soon, people from KDE around the world! I will miss you all and the city of Vienna a lot. 🙂

GSoC 2018 – Coding Period (June 26th to July 15th): RAID on Linux

I passed the second evaluation of Google Summer of Code 2018. I am ready for the third phase, but before that I’ll give some updates on how my progress with RAID in kpmcore is going. This post will explain how RAID management works on Linux.

Linux and RAID devices

First of all, you must know which types of RAID exist on Linux. I am not talking about RAID levels (i.e. 0 [striping], 1 [mirroring], 4, 5, 6 and 10…); I explained a little about those in my GSoC proposal. What I am talking about here are the two different tools used on Linux to manage two different types of RAID identified by the kernel, which are:

  • Fake RAID (ATA RAID, managed by dmraid tool).
  • Software RAID (MD RAID, managed by mdadm tool).

Fake RAID (ATA RAID)

Fake RAID devices are defined by the firmware of an onboard RAID controller (the actual RAID work is still done in software) and are handled through the device-mapper, a component of the Linux kernel that provides the infrastructure for block device management. dmraid uses libdevmapper and the device-mapper kernel driver to manage this specific type of RAID, including discovering, configuring and activating ATA RAID devices. Below you can see the types of RAID arrays supported by dmraid:

asr     : Adaptec HostRAID ASR (0,1,10)
ddf1    : SNIA DDF1 (0,1,4,5,linear)
hpt37x  : Highpoint HPT37X (S,0,1,10,01)
hpt45x  : Highpoint HPT45X (S,0,1,10)
isw     : Intel Software RAID (0,1,5,01)
jmicron : JMicron ATARAID (S,0,1)
lsi     : LSI Logic MegaRAID (0,1,10)
nvidia  : NVidia RAID (S,0,1,10,5)
pdc     : Promise FastTrack (S,0,1,10)
sil     : Silicon Image(tm) Medley(tm) (0,1,10)
via     : VIA Software RAID (S,0,1,10)
dos     : DOS partitions on SW RAIDs

Software RAID (MD RAID)

Software RAID devices are managed by mdadm. The device-mapper is not aware of RAID arrays created with the mdadm tool. mdadm supports the following RAID levels: 0, 1, 4, 5, 6 and 10. Below you can see mdadm’s operation modes (as described in its manpage):

  • Assemble:

Assemble the components of a previously created array into an active array. Components can be explicitly given or can be searched for. mdadm checks that the components do form a bona fide array, and can, on request, fiddle superblock information so as to assemble a faulty array.

  • Build:

Build an array that doesn’t have per-device metadata (superblocks). For these sorts of arrays, mdadm cannot differentiate between initial creation and subsequent assembly of an array. It also cannot perform any checks that appropriate components have been requested. Because of this, the Build mode should only be used together with a complete understanding of what you are doing.

  • Create:

Create a new array with per-device metadata (superblocks). Appropriate metadata is written to each device, and then the array comprising those devices is activated. A ‘resync’ process is started to make sure that the array is consistent (e.g. both sides of a mirror contain the same data) but the content of the device is left otherwise untouched. The array can be used as soon as it has been created. There is no need to wait for the initial resync to finish.

  • Follow (or Monitor):

Monitor one or more md devices and act on any state changes. This is only meaningful for RAID1, 4, 5, 6, 10 or multipath arrays, as only these have interesting state. RAID0 or Linear never have missing, spare, or failed drives, so there is nothing to monitor.

  • Grow:

Grow (or shrink) an array, or otherwise reshape it in some way. Currently supported growth options include changing the active size of component devices and changing the number of active devices in Linear and RAID levels 0/1/4/5/6, changing the RAID level between 0, 1, 5, and 6, and between 0 and 10, changing the chunk size and layout for RAID 0,4,5,6,10 as well as adding or removing a write-intent bitmap.

  • Incremental:

Add a single device to an appropriate array. If the addition of the device makes the array runnable, the array will be started. This provides a convenient interface to a hot-plug system. As each device is detected, mdadm has a chance to include it in some array as appropriate. Optionally, when the --fail flag is passed in we will remove the device from any active array instead of adding it.

  • Manage:

This is for doing things to specific components of an array such as adding new spares and removing faulty devices.

  • Misc:

This is an ‘everything else’ mode that supports operations on active arrays, operations on component devices such as erasing old superblocks, and information gathering operations.

After creating your RAID arrays with the mdadm tool, you can save their configuration in the file /etc/mdadm.conf (or /etc/mdadm/mdadm.conf, depending on your distro). This file is used to reassemble your arrays and to let the system know which arrays should be loaded. Your active devices will be listed in /proc/mdstat.
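Just to make this concrete, here is a minimal sketch that lists the active arrays by reading /proc/mdstat. It only illustrates the interface the kernel exposes; it is not how kpmcore actually parses the file.

#include <QDebug>
#include <QFile>
#include <QString>
#include <QTextStream>

int main()
{
    QFile mdstat(QStringLiteral("/proc/mdstat"));
    if (!mdstat.open(QIODevice::ReadOnly | QIODevice::Text))
        return 1;

    QTextStream in(&mdstat);
    while (!in.atEnd()) {
        const QString line = in.readLine();
        // Array lines look like: "md0 : active raid1 sdb1[1] sda1[0]"
        if (line.startsWith(QLatin1String("md")))
            qDebug() << "Active array:" << line.section(QLatin1Char(' '), 0, 0);
    }
    return 0;
}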

Conclusion

This GSoC project focuses on implementing Software RAID manipulation in kpmcore, KDE Partition Manager and Calamares. kpmcore will communicate with mdadm to manipulate these devices. I have finished implementing support for visualizing SoftwareRAID in kpmcore and KDE Partition Manager; they now load these devices and allow the user to manipulate their partitions.

For now, I am improving the code to make it more stable, and soon I’ll write a more detailed post about it. There is a group of related commits in the kauth branch of both the kpmcore and partitionmanager repositories. I am also testing how kpmcore deals with multiple RAID levels. There were some problems, such as partition table loading and the RAID partition naming scheme when creating/removing partitions, but my mentors are helping me a lot with these issues.

Going to Akademy 2018

I am going to Akademy this year. It will happen in Vienna, Austria between August 11th and August 17th.

I will talk there about my experiences during Season of KDE 2018 and Google Summer of Code 2018, explaining my work and progress in KDE Partition Manager, kpmcore and Calamares.

This will be a great opportunity to meet people from the KDE community, share ideas with them and learn more about other KDE projects. I am sure that it will inspire me a lot as an open source contributor.

🙂

GSoC 2018 – Coding Period (June 18th to June 26th): Finishing LVM VG support and starting RAID implementation

I have some good news to tell in this post, but it will be a brief report.

I have finished complete LVM VG support in Calamares, including the resize, deactivate and remove operations. All my progress is in the PR from last week (I changed its name because I decided to include the remaining LVM implementations in it). This PR has some dependency issues with kpmcore’s latest versions and the code needs some refactoring, but you can see it here:

[partition] Finish LVM Volume Group support

I changed Calamares’ Partition page to include a group of buttons for VG operations. There is now a row reserved for them, organized in a horizontal layout. I’ll talk more with Andrius and Adriaan (my GSoC mentors) about the positioning of these buttons.
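A minimal Qt sketch of that idea (the function and button labels here are illustrative, not Calamares’ actual widget code):

#include <QHBoxLayout>
#include <QPushButton>
#include <QString>
#include <QWidget>

// Build a horizontal row of buttons for the volume group operations.
QWidget* createVgButtonRow(QWidget *parent = nullptr)
{
    auto *row = new QWidget(parent);
    auto *layout = new QHBoxLayout(row);
    layout->addWidget(new QPushButton(QStringLiteral("New Volume Group"), row));
    layout->addWidget(new QPushButton(QStringLiteral("Resize Volume Group"), row));
    layout->addWidget(new QPushButton(QStringLiteral("Deactivate Volume Group"), row));
    layout->addWidget(new QPushButton(QStringLiteral("Remove Volume Group"), row));
    return row;
}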

Image 1: Calamares Partition page with the button group for VG operations.

Here is a brief description of each of these VG operations:

Resize Volume Group

It loads a dialog (which inherits from VolumeGroupBaseDialog, so it is similar to the CreateVolumeGroupDialog GUI) containing all the PVs of the selected device plus the available (i.e. unused) LVM PVs in the system. The user can then select among the available partitions to grow or shrink the LVM VG (the sketch after the screenshot shows the underlying LVM calls).

Image 2: Resize Volume Group GUI.
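Under the hood, growing or shrinking a VG maps to adding or removing a physical volume. A hedged sketch of those calls follows (placeholder names; Calamares performs this through kpmcore jobs, which wrap the external tools in their own way):

#include <QProcess>
#include <QString>
#include <QStringList>

// Minimal sketch: grow a VG by adding a PV, or shrink it by removing one.
// Equivalent to: vgextend myvg /dev/sdb1  and  vgreduce myvg /dev/sdb1
int growVolumeGroup(const QString &vgName, const QString &pvPath)
{
    return QProcess::execute(QStringLiteral("vgextend"), { vgName, pvPath });
}

int shrinkVolumeGroup(const QString &vgName, const QString &pvPath)
{
    return QProcess::execute(QStringLiteral("vgreduce"), { vgName, pvPath });
}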

Deactivate Volume Group

This process unloads the partition model of the current LVM VG device, releasing each of its LVs. It can be used when you have logical volumes (LVs) in your VG but want to remove the VG. This job is executed immediately (i.e. it is not queued in the JobQueue), and you can remove the VG after deactivating it (the sketch after the screenshot shows the underlying LVM call).

Image 3: “test” VG after deactivating it.
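A minimal sketch of the underlying LVM call, assuming the VG name is passed in (this is not the actual Calamares/kpmcore job code):

#include <QProcess>
#include <QString>
#include <QStringList>

// Deactivate every logical volume in the given volume group.
// Equivalent to: vgchange -an myvg
int deactivateVolumeGroup(const QString &vgName)
{
    return QProcess::execute(QStringLiteral("vgchange"),
                             { QStringLiteral("-an"), vgName });
}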

Remove Volume Group

You can remove a VG in two cases: if you have deactivated it or if you have not created any logical volume in it. After you choose this operation, a remove volume group job is enqueued in the pending jobs list (see the sketch below for the underlying LVM call).
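And the corresponding removal call, again as a hedged sketch with a placeholder name rather than the real job implementation:

#include <QProcess>
#include <QString>
#include <QStringList>

// Remove a volume group once its logical volumes are gone or the group has
// been deactivated. Equivalent to: vgremove myvg
int removeVolumeGroup(const QString &vgName)
{
    return QProcess::execute(QStringLiteral("vgremove"), { vgName });
}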

Conclusion

I started studying the RAID implementation this week. First, I’ll be working on the kpmcore side, using the raid-support branch. I’ll write more detailed posts about RAID over the next weeks.

See you later! 🙂

GSoC 2018 – Coding Period (May 28th to June 18th): First Evaluation and Progress with LVM VG

The first evaluation is over and I passed successfully! 🙂

I ran into some problems during the last weeks of Google Summer of Code which presented some challenges. One of them was caused by a physical problem with my hard drive. I had not backed up some of my work and had to redo parts of my code. Since I already knew how to proceed, it was faster than the first time.

I had to understand how device loading works in Calamares in order to show a preview of the new LVM VG during its creation on the Partition page. I need to list it as a new storage device on this page and handle the revert process. I implemented some basic fixes and tried to improve it.

There were some problems when reverting pending operations, especially if an old LVM VG was already configured in the system. This problem was related to kpmcore’s scanDevice procedure. The SfdiskBackend::scanDevice method is responsible for loading a device based on its device node, but it was only considering disk devices and ignoring logical devices.

LVM VGs were only loaded by the SfdiskBackend::scanDevices method, which loads all the devices found in the system. Calamares uses scanDevice during the revert process to update the device references according to the original ones. So, when an old VG was previously configured, the revert process called scanDevice and got back a nullptr reference, causing an unexpected segfault.

I fixed this method to load the specified VG, but my solution needs to know all the physical devices in the system to do it, as you can see in this commit:

https://cgit.kde.org/kpmcore.git/commit/?id=358957641b2c7ff6b69d090c9e1fddba482c6817

As you can imagine, this can take significant processing time in some cases, because it loads all the devices every time a specific VG needs to be loaded. I need to improve it with some other procedure, but I’m worried about the loading of LVM PVs, which requires loading every device that each PV is associated with. A rough sketch of the current approach is shown below.
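Conceptually, the workaround looks like this sketch. All the names here are illustrative stand-ins, not kpmcore's real API; the actual change is in the commit linked above.

#include <QList>
#include <QString>

// Illustrative stand-in for kpmcore's device classes.
struct Device {
    QString deviceNode;
};

// In the real backend this is the expensive part: every disk and logical
// device in the system is probed. Stubbed out here.
QList<Device> scanAllDevices()
{
    return {};
}

// Resolve a single VG by its device node by scanning everything and picking
// the matching entry. Returns true and fills 'result' if found.
bool findVolumeGroup(const QString &deviceNode, Device &result)
{
    const QList<Device> devices = scanAllDevices();
    for (const Device &d : devices) {
        if (d.deviceNode == deviceNode) {
            result = d;
            return true;
        }
    }
    return false;   // unknown device node; the caller must handle this
}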

Regarding my progress in Calamares during this time, you can take a look at the commits in my PR:

https://github.com/calamares/calamares/pull/984

Well, it has been a great experience, but I need to improve some things in my work and in my processes, especially communication; I’m learning how to be a more communicative person in open source projects. I’m gaining a lot of knowledge! 🙂

GSoC 2018 – Coding Period (May 14th to May 28th): Initial implementation of LVM VG creation in Calamares

The coding period has finally started and I have done some work on the implementation of LVM Volume Group creation in Calamares. In this post, I’ll explain how I have implemented it and how my work has progressed so far.

LVM VG creation GUI

As I said here, I planned to create a button to access the LVM VG creation GUI on the Partition page in Calamares. This GUI should work similarly to the LVM VG creation GUI seen in KDE Partition Manager. It was also necessary to create a VG widget hierarchy to reuse in other processes (i.e. resizing LVM VGs and RAID operations).

Image 1: Partition page in Calamares. Note the “New Volume Group” button.

Image 2: Create New Volume Group GUI in Calamares.

This interface is responsible for creating LVM Volume Groups from the selected LVM PVs. After this process, the new LVM VG is created and you can then create new LVM Logical Volumes in it.
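For reference, creating a VG from a set of PVs maps to a single LVM call. This is a hedged sketch with placeholder names, not the Calamares CreateVolumeGroupJob code:

#include <QProcess>
#include <QString>
#include <QStringList>

// Minimal sketch: create a volume group from a list of physical volumes.
// Equivalent to: vgcreate myvg /dev/sdb1 /dev/sdb2
int createVolumeGroup(const QString &vgName, const QStringList &pvPaths)
{
    QStringList args{ vgName };
    args += pvPaths;
    return QProcess::execute(QStringLiteral("vgcreate"), args);
}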

New Classes

Here are some brief descriptions of the new classes involved in this process:

  • partition/jobs/CreateVolumeGroupJob: Calamares::Job to create new VGs. I’m planning to create a VolumeGroupJob hierarchy, as seen in PartitionJob.
  • partition/gui/VolumeGroupBaseDialog: Base dialog to Volume Group operations.
  • partition/gui/CreateVolumeGroupDialog: Dialog to create Volume Groups. It derives from VolumeGroupBaseDialog.
  • partition/gui/ListPhysicalVolumeWidgetItem: QListWidgetItem made to store a physical volume reference.

Conclusion

I need to make some fixes before proceeding to the other goals of this project. I’m planning to create the resize VG GUI this week and to improve some things in the code I have already written before pushing it to my branch and opening a PR to Calamares. I’ve made a video showing these initial implementations, which you can see here:

Video 1: Google Summer of Code 2018 – Initial implementation of LVM VG creation in Calamares

Until the next post. 🙂