This blog post is about Drutopia as it is right now, not about all the ways we can improve it.
Building your website on Drutopia has the advantage of allowing multiple people to create content of different defined types that can be clearly related to one another, listed and presented in different ways, and filtered. This is the super-power of structured content.
Drutopia's main disadvantages are that free-form changes to how it looks are slow and take specialized knowledge (unlike Wix or Squarespace, which are primarily page builders), and that it does not have many themes to choose from (as WordPress does).
The defined types of content available in Drutopia are:
All of these types of content can be connected by Topic, for site-wide curated categorization, and by Tags, for site-wide free-form categorization. Most content types can be further distinguished by type (article type, person role or type, event type, etc.).
Content is composed of sections, which can be the usual WYSIWYG text editor that allows insertion of images and other media, or more specialized sections for image, video, or file. This capability to mix different kinds of sections on pages is under-utilized at present in Drutopia, but it can be extended to embed forms, including donation forms, or listings of other content.
Additional content types, which do not have the benefits of automatic listing pages with faceted search because they are meant for one-off or unique content, are:
A lot of the long-term advantage is having structured content you can make use of in ever-evolving ways, rather than having an undifferentiated mass of pages that can only be sorted through slowly and with difficulty.
But here we are taking a step back and looking at the platform more generally.
Drutopia is open source free software
Weebly, Wix, Squarespace, and inferior options bundled with other services (such as Mailchimp Website Builder and GoDaddy Website Builder) do not allow creating content of carefully defined types, with relationships and connections between content. There are also usually limitations or extra costs associated with having multiple user accounts, or multiple people logged in at the same time. They do let you change how the whole site and different pages look pretty easily. Moving your content to different hosting while keeping the software that runs the site is impossible, and switching to another platform while keeping your content is difficult.
WordPress allows some structured content and some customization of the look of the whole site pretty easily. WordPress can be hosted in different places (beware proprietary plugins though). WordPress content exports well if you want to change to a different software platform. Drutopia can import WordPress content.
Drutopia comes with a set of useful kinds of content already defined, complete with listing pages that can be filtered by cross-site topics and within-section types. Drutopia has limited visual customization currently available without knowing HTML, CSS, how to make templates, and how to work with a local development environment. Drutopia content exports well if you want to change to a different software platform, although to get the full benefit the other platform will need Drupal's capability to have structured content with rich relationships among content.
I’ve been working on open source projects for a long time and contributing to Drupal for 6 years now.
I want to share my experience and the things that have helped me contribute to Drupal.
One of the first problems I had to face when I started contributing was picking an issue from the Drupal issue queue and starting to work on it. When I started, all the issues seemed very hard or complex (and some are). Fortunately, there is a list of issues for people who want to start contributing to Drupal: these issues carry the Novice tag. The idea of these issues is to let someone get a feel for what working on an issue is like.
Some of the things to learn while working on novice issues are:
While working on novice issues is a good way to start, it is necessary to jump to issues not marked as novice as soon as we feel comfortable with the things listed above. The non-novice issues are where we can really learn how Drupal works.
A few ways to start working on non-novice issues are:
When you feel frustrated while working on an issue, remember the Thomas Edison quote: "Genius is one percent inspiration, ninety-nine percent perspiration." Keep trying, keep working, keep asking questions, and keep trying new things; just don't give up, and eventually we will make it happen. When someone starts contributing, it is normal to feel like they are not good enough. Just keep trying!
Remember, for most people contributing is unpaid labor. Don't feel disappointed if you spend a good amount of time on an issue and no one reviews it. There are issues that have been around for years without being committed, but even so, the next developer with the same problem will find your patch and use it. So even if your code does not become part of a module or core, it still helps.
Going to conferences and meeting Drupalistas is a good way to keep you motivated and to learn new things. It is fun to meet in person the Drupal.org users who helped you in the issue queue.
Another thing that might help keep you motivated is to see your name at DrupalCores.com. There you can see a list of users and mentions: for every new mention or contribution, your nick will climb a few places in that ranking.
Find It makes it easier for caretakers to find opportunities for their children online so that they can stay focused on taking care of themselves and their children.
Dave Onion has been building Drupal sites since the mid-2000s, when his work with independent media and various organizing projects led him to learn Drupal (starting around the release of Drupal 4.7) as a tool for handing off the reins to non-techies who needed to manage their own websites.
Since then, Dave has worked with a wide range of organizations providing front-line web building support for social movements: struggles to shut down prisons, independent newspapers, the Occupy movement, movement gatherings and social spaces, and more.
Dave has also been involved in a number of other tech projects, including autonomous community-controlled cell phone networks in Mexico, off-grid solar projects supporting community struggles and land occupations, and local mesh networks in his hometown of Philadelphia.
He believes technologies should be firmly in the control of the communities who use them.
Over 8 years have passed since there was a DrupalCamp in tropical Nicaragua. With the help of a diverse group of volunteers, sponsors, and university faculty staff, we held our second one. DrupalCamp Lagos y Volcanes ("Lakes & Volcanoes") was a great success, with over 100 people attending over 2 days. It was a big undertaking, so we followed in giants' footsteps to prepare for our event. Many of the ideas were taken from the organizers' experience attending Drupal events. Others came from local free software communities who had organized events before us. Let me share what we did, how we did it, and what the results were.
In the previous posts we talked about the option of managing migrations as configuration entities and some of the benefits this brings. Today, we are going to learn about another useful feature provided by the Migrate Plus module: migration groups. We are going to see how they can be used to execute migrations together and share configuration among them. Let’s get started.

The Migrate Plus module defines a new configuration entity called migration group. When the module is enabled, each migration can declare the group it belongs to. This serves two purposes: migrations in the same group can be executed together, and they can share configuration.
To demonstrate how to leverage migration groups, we will convert the CSV source example to use migrations defined as configuration and groups. You can get the full code example at https://github.com/dinarcon/ud_migrations The module to enable is UD configuration group migration (CSV source) whose machine name is ud_migrations_config_group_csv_source. It comes with three migrations: udm_config_group_csv_source_paragraph, udm_config_group_csv_source_image, and udm_config_group_csv_source_node. Additionally, the demo module provides the udm_config_group_csv_source group.
Note: The Migrate Tools module provides a user interface for managing migrations defined as configuration. It is available under “Structure > Migrations” at /admin/structure/migrate. This user interface will be explained in a future entry. For today’s example, it is assumed that migrations are executed using the Drush commands provided by Migrate Plus. In the past we have used the Migrate Run module to execute migrations, but that module does not offer the ability to import or roll back migrations per group.
The migration groups are defined in YAML files using the following naming convention: migrate_plus.migration_group.[migration_group_id].yml. Because they are configuration entities, they need to be placed in the config/install directory of your module. Files placed in that directory following that pattern will be synced into Drupal’s active configuration when the module is installed for the first time (only). If you need to update the migration groups, you make the modifications to the files and then sync the configuration again. More details on this workflow can be found in this article. The following snippet shows an example migration group:
uuid: e88e28cc-94e4-4039-ae37-c1e3217fc0c4
id: udm_config_group_csv_source
label: 'UD Config Group (CSV source)'
description: 'A container for migrations about individuals and their favorite books. Learn more at https://understanddrupal.com/migrations.'
source_type: 'CSV resource'
shared_configuration: null

The uuid key is optional. If not set, the configuration management system will create one automatically and assign it to the migration group. Setting one simplifies the workflow for updating configuration entities as explained in this article. The id key is required. Its value is used to associate individual migrations to this particular group.
The label, description, and source_type keys are used to give details about the migration group. Their values appear in the user interface provided by Migrate Tools. label is required and serves as the name of the group. description is optional and provides more information about the group. source_type is optional and gives context about the type of source you are migrating from. For example, "Drupal 7", "WordPress", "CSV file", etc.
To associate a migration with a group, set the migration_group key in the migration definition file. For example:
uuid: 97179435-ca90-434b-abe0-57188a73a0bf
id: udm_config_group_csv_source_node
label: 'UD configuration host node migration for migration group example (CSV source)'
migration_group: udm_config_group_csv_source
source: ...
process: ...
destination: ...
migration_dependencies: ...

Note that if you omit the migration_group key, it will default to a null value, meaning the migration is not associated with any group. You will still be able to execute the migration from the command line, but it will not appear in the user interface provided by Migrate Tools. If you want the migration to be available in the user interface without creating a new group, you can set the migration_group key to default. This group is automatically created by Migrate Plus and can be used as a generic container for migrations.
Migration groups are used to organize migrations. Migration projects usually involve several types of elements to import: for example, book reports, events, subscriptions, user accounts, etc. Each of them might require multiple migrations to be completed. Let’s consider a migration for a "book report" content type. It has many entity reference fields: book cover (image), support documents (file), tags (taxonomy term), author (user), and citations (paragraphs). In this case, you will have one primary node migration that depends on five migrations of multiple types. You can put all of them in the same group and execute them together.
It is very important not to confuse migration groups with migration dependencies. In the previous example, the primary book report node migration should still list all of its dependencies in the migration_dependencies section of its definition file. Otherwise, there is no guarantee that the five migrations it depends on will be executed first. This could cause problems if the primary node migration runs before the images, files, taxonomy terms, users, or paragraphs have been imported into the system.
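As a sketch, using two of the demo migrations from this example (your own migration IDs will differ), dependencies are declared in the primary migration's definition file like this:

migration_dependencies:
  required:
    # These migrations must run before the node migration is imported.
    - udm_config_group_csv_source_image
    - udm_config_group_csv_source_paragraph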
It is possible to execute all migrations in a group by issuing a single Drush command with the --group flag. This is supported by the import and rollback commands exposed by Migrate Tools. For example, drush migrate:import --group='udm_config_group_csv_source'. Note that, as of this writing, there is no way to run all migrations in a group in a single operation from the user interface. You could import the main migration, and the system will make sure that any migrations listed as explicit dependencies are run first. If the group contains more migrations than the ones listed as dependencies, those will not be imported. Moreover, migration dependencies are only executed automatically for import operations. Dependent migrations will not be rolled back automatically if the main migration is rolled back individually.
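For reference, these are the group-level Drush commands exposed by Migrate Tools, using the group ID from this example (the status command is included for convenience):

# Import every migration that belongs to the group.
drush migrate:import --group='udm_config_group_csv_source'
# Roll back every migration in the group.
drush migrate:rollback --group='udm_config_group_csv_source'
# Check the status of the migrations in the group.
drush migrate:status --group='udm_config_group_csv_source'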
Note: This example assumes you are using Drush to execute the migrations. At the time of this writing, it is not possible to rollback a CSV migration from the user interface. See this issue in the Migrate Source CSV for more context.
Arguably, the major benefit of migration groups is the ability to share configuration among migrations. In the example, there are three migrations all reading from CSV files. Some configurations like the source plugin and header_offset can be shared. The following snippet shows an example of declaring shared configuration in the migration group for the CSV example:
uuid: e88e28cc-94e4-4039-ae37-c1e3217fc0c4
id: udm_config_group_csv_source
label: 'UD Config Group (CSV source)'
description: 'A container for migrations about individuals and their favorite books. Learn more at https://understanddrupal.com/migrations.'
source_type: 'CSV resource'
shared_configuration:
  dependencies:
    enforced:
      module:
        - ud_migrations_config_group_csv_source
  migration_tags:
    - UD Config Group (CSV Source)
    - UD Example
  source:
    plugin: csv
    # It is assumed that CSV files do not contain a headers row. This can be
    # overridden for migrations where that is not the case.
    header_offset: null

Any configuration that can be set in a regular migration definition file can be set under the shared_configuration key. When the migrate system loads the migration, it will read the migration group it belongs to and pull in any shared configuration that is defined. If both the migration and the group provide a value for the same key, the one defined in the migration definition file will override the one set in the migration group. If a key only exists in the group, it will be added to the migration when the definition file is loaded.
In the example, the dependencies, migration_tags, and source options are being set. They will apply to all migrations that belong to the udm_config_group_csv_source group. If you updated any of these values, the changes would propagate to every migration in the group. Remember that you would need to sync the migration group configuration for the update to take effect. You can do that with partial configuration imports as explained in this article.
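A rough sketch of that sync, assuming a Drush version whose config:import command supports the --partial and --source options, and assuming the group file lives in the demo module's config/install directory:

# Re-import only the configuration files in that directory; other active
# configuration on the site is left untouched.
drush config:import --partial --source=modules/custom/ud_migrations/ud_migrations_config_group_csv_source/config/install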
Any configuration set in the group can be overridden in specific migrations. In the example, the header_offset is set to null which means the CSV files do not contain a header row. The node migration uses a CSV file that contains a header row so that configuration needs to be redeclared. The following snippet shows how to do it:
uuid: 97179435-ca90-434b-abe0-57188a73a0bf
id: udm_config_group_csv_source_node
label: 'UD configuration host node migration for migration group example (CSV source)'
# Any configuration defined in the migration group can be overridden here
# by re-declaring the configuration and assigning a value.
# dependencies inherited from migration group.
# migration_tags inherited from migration group.
migration_group: udm_config_group_csv_source
source:
  # plugin inherited from migration group.
  path: modules/custom/ud_migrations/ud_migrations_csv_source/sources/udm_people.csv
  ids: [unique_id]
  # This overrides the header_offset defined in the group. The referenced CSV
  # file includes headers in the first row. Thus, a value of 0 is used.
  header_offset: 0
process: ...
destination: ...
migration_dependencies: ...

Another example would be multiple migrations reading from a remote JSON source. Let’s say that instead of fetching a remote file, you want to read a local file. The only file you would have to update is the migration group. Change the data_fetcher_plugin key to file and the urls array to the path of the local file. You can try this with the ud_migrations_config_group_json_source module from the demo repository.
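Hypothetically, the shared configuration for that JSON group could look like the following snippet. The url source plugin, the file data fetcher, and the json data parser are provided by Migrate Plus; the path to the local file is only an illustration:

shared_configuration:
  source:
    plugin: url
    data_fetcher_plugin: file
    data_parser_plugin: json
    urls:
      # Hypothetical path to a local JSON file inside the demo module.
      - modules/custom/ud_migrations/ud_migrations_config_group_json_source/sources/udm_data.json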
What did you learn in today’s blog post? Did you know that migration groups can be used to share configuration among different migrations? Share your answers in the comments. Also, I would be grateful if you shared this blog post with others.
Next: What is the difference between migration tags and migration groups in Drupal?
This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.
Agaric builds tools for medical and scientific communities to advance their work, enhance collaboration, and improve outcomes.

If you would like me to speak at your event, let me know the details below.
Benjamin Melançon of Agaric helped with a patch for the Drupal 7 version of Insert module.
Most companies are run hierarchically. Small companies might have a boss while larger companies usually have a CEO and board of directors or other form of corporate leadership.
We believe work can be done differently: organized democratically and transparently as a worker-owned cooperative. There are seven principles that cooperatives follow: voluntary and open membership; democratic member control; member economic participation; autonomy and independence; education, training, and information; cooperation among cooperatives; and concern for community.
This means that each employee is also an owner and we make decisions democratically, fostering a culture of honesty and accountability. Our focus on community ensures that quality work comes first, always contributing back to the software commons when possible. Finally, in a world of unprecedented wealth disparity, we hope to serve as a model for what a business should be.
In recent articles, we have presented some recommendations and tools to debug Drupal migrations. Using a proper debugger is definitely the best way to debug Drupal, be it migrations or other subsystems. In today’s article, we are going to learn how to configure XDebug inside DrupalVM to connect to PHPStorm: first via the command line using Drush commands, and then via the user interface using a browser. Let’s get started.
Important: User interfaces tend to change. Screenshots and referenced on-screen text might differ in new versions of the different tools. They can also vary per operating system. This article uses menu items from Linux. Refer to the official DrupalVM documentation for detailed installation and configuration instructions. For this article, it is assumed that VirtualBox, Vagrant, and Ansible are already installed. If you need help with those, refer to DrupalVM’s installation guide.
First, get a copy of DrupalVM by cloning the repository or downloading a ZIP or TAR.GZ file from the available releases. If you downloaded a compressed file, expand it to have access to the configuration files. Before creating the virtual machine, make a copy of default.config.yml into a new file named config.yml. The latter will be used by DrupalVM to configure the virtual machine (VM). In this file, make the following changes:
# config.yml file
# Based off default.config.yml
vagrant_hostname: migratedebug.test
vagrant_machine_name: migratedebug
# For dynamic IP assignment the 'vagrant-auto_network' plugin is required.
# Otherwise, use an IP address that has not been used by any other virtual machine.
vagrant_ip: 0.0.0.0
# All the other extra packages can remain enabled.
# Make sure the following three get installed by uncommenting them.
installed_extras:
  - drupalconsole
  - drush
  - xdebug
php_xdebug_default_enable: 1
php_xdebug_cli_disable: no

The vagrant_hostname is the URL you will enter in your browser’s address bar to access the Drupal installation. Set vagrant_ip to an IP that has not been taken by another virtual machine. If you are unsure, you can set the value to 0.0.0.0 and install the vagrant-auto_network plugin. The plugin will make sure that an available IP is assigned to the VM. In the installed_extras section, uncomment xdebug and drupalconsole. Drupal Console is not necessary for getting XDebug to work, but it offers many code introspection tools that are very useful for Drupal debugging in general. Finally, set php_xdebug_default_enable to 1 and php_xdebug_cli_disable to no. These last two settings are very important for being able to debug Drush commands.
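If you opt for dynamic IP assignment, the vagrant-auto_network plugin mentioned above can be installed from your host machine with a single command:

vagrant plugin install vagrant-auto_network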
Then, open a terminal and change directory to where the DrupalVM files are located. Keep the terminal open; we are going to execute various commands from there. Start the virtual machine by executing vagrant up. If you had already created the VM, you can still make changes to the config.yml file and then reprovision. If the virtual machine is running, execute the command vagrant provision. Otherwise, you can start and reprovision the VM in a single command: vagrant up --provision. Finally, SSH into the VM by executing vagrant ssh.
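To recap, these are the Vagrant commands mentioned above, all run from the directory containing the DrupalVM files:

vagrant up                # create and provision the virtual machine the first time
vagrant provision         # re-apply config.yml changes while the VM is running
vagrant up --provision    # start and reprovision the VM in a single command
vagrant ssh               # log into the virtual machine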
By default, DrupalVM will use the Drupal composer template project to get a copy of Drupal. That means that you will be managing your module and theme dependencies using composer. When you SSH into the virtual machine, you will be in the /var/www/drupalvm/drupal/web directory. That is Drupal’s docroot. The composer file that manages the installation is actually one directory up. Normally, if you run a composer command from a directory that does not have a composer.json file, composer will try to find one further up in the directory hierarchy. Feel free to manually go one directory up or rely on composer’s default behaviour to locate the file.
For good measure, let’s install some contributed modules. Inside the virtual machine, in Drupal’s docroot, execute the following command: composer require drupal/migrate_plus drupal/migrate_tools. You can also create a directory at /var/www/drupalvm/drupal/web/modules/custom and place in it the custom module we have been working on throughout the series. You can get it at https://github.com/dinarcon/ud_migrations.
To make sure things are working, let’s enable one of the example modules by executing: drush pm-enable ud_migrations_config_entity_lookup_entity_generate. This module comes with one migration: udm_config_entity_lookup_entity_generate_node. If you execute drush migrate:status, the example migration should be listed.
With Drupal already installed and the virtual machine running, let’s configure PHPStorm. Start a new project pointing to the DrupalVM files. Feel free to follow your preferred approach to project creation. For reference, one way to do it is by going to "Files > Create New Project from Existing Files". In the dialog, select "Source files are in a local directory, no web server is configured yet." and click next. Look for the DrupalVM directory, click on it, click on “Project Root”, and then “Finish”. PHPStorm will begin indexing the files and detect that it is a Drupal project. It will prompt you to enable the Drupal coding standards, indicate which directory contains the installation path, and if you want to set PHP include paths. All of that is optional but recommended, especially if you want to use this VM for long term development.
Now the important part. Go to “Files > Settings > Language and Frameworks > PHP”. In the panel, there is a text box labeled “CLI Interpreter”. At its right end, there is a button with three dots, like an ellipsis (...). The next step requires that the virtual machine is running because PHPStorm will try to connect to it. After verifying that it is the case, click the plus (+) button at the top left corner to add a CLI Interpreter. From the list that appears, select “From Docker, Vagrant, VM, Remote...”. In the “Configure Remote PHP Interpreter” dialog select “Vagrant”. PHPStorm will detect the SSH connection to connect to the virtual machine. Click “OK” to close the multiple dialog boxes. When you go back to the “Languages & Frameworks” dialog, you can set the “PHP language level” to match the version from the Remote CLI Interpreter.


You are almost ready to start debugging. There are a few things left to do. First, let’s create a breakpoint in the import method of the MigrateExecutable class. You can go to “Navigate > Class” to search the project by class name, or click around in the Project structure until you find the class. It is located at ./drupal/web/core/modules/migrate/src/MigrateExecutable.php in the VM directory. You can add a breakpoint by clicking on the bar to the left of the code area. A red circle will appear, indicating that the breakpoint has been added.
Then, you need to instruct PHPStorm to listen for debugging connections. For this, click on “Run > Start Listening for PHP Debugging Connections”. Finally, you have to set some server mappings. For this you will need the IP address of the virtual machine. If you configured the VM to assign the IP dynamically, you can skip this step momentarily. PHPStorm will detect the incoming connection, create a server with the proper IP, and then you can set the path mappings.
Let’s switch back to the terminal. If you are not inside the virtual machine, you can SSH into the VM executing vagrant ssh. Then, execute the following command (everything in one line):
XDEBUG_CONFIG="idekey=PHPSTORM" /var/www/drupalvm/drupal/vendor/bin/drush migrate:import udm_config_entity_lookup_entity_generate_node
For the breakpoint to be triggered, the following needs to happen:
- The Drush binary inside the project's vendor directory must be used. DrupalVM has a globally available Drush binary located at /usr/local/bin/drush. That is not the one to use. For debugging purposes, always execute Drush from the vendor directory.
- The XDEBUG_CONFIG environment variable must be set to “idekey=PHPSTORM”. There are many ways to accomplish this, but prepending the variable as shown in the example is a valid way to do it.

When the command is executed, a dialog will appear in PHPStorm. In it, you will be asked to select a project or a file to debug. Accept what is selected by default for now. By accepting the prompt, a new server will be configured using the proper IP of the virtual machine. After doing so, go to “Files > Settings > Language and Frameworks > PHP > Servers”. You should see one already created. Make sure the “Use path mappings” option is selected. Then, look for the direct child of “Project files”. It should be the directory on your host computer where the VM files are located. In that row, set the “Absolute path on the server” column to /var/www/drupalvm. You can delete any other path mapping. There should only be the one from the previous prompt. Now, click “OK” in the dialog to accept the changes.


Finally, run the Drush command from inside the virtual machine once more. This time the program execution should stop at the breakpoint. You can use the Debug panel to step over each line of code and see how the variables change over time. Feel free to add more breakpoints as needed. In the previous article, there are some suggestions about that. When you are done, let PHPStorm know that it should no longer listen for connections. For that, click on “Run > Stop Listening for PHP Debugging Connections”. And that is how you can debug Drush commands for Drupal migrations.

If you also want to be able to debug from the user interface, go to https://www.jetbrains.com/phpstorm/marklets/ and generate the bookmarklets for XDebug. The IDE Key should be PHPSTORM. When the bookmarklets are created, you can drag and drop them into your browser’s bookmarks toolbar. Then, you can click on them to start and stop a debugging session. The IDE needs to be listening for incoming debugging connections, as was the case for Drush commands.

Note: There are browser extensions that let you start and stop debugging sessions. Check the extensions repository of your browser to see which options are available.
Finally, set breakpoints as needed and go to a page that would trigger them. If you are following along with the example, you can go to http://migratedebug.test/admin/structure/migrate/manage/default/migrations/udm_config_entity_lookup_entity_generate_node/execute and, once there, select the “Import” operation and click the “Execute” button. This should open a prompt in PHPStorm to select a project or a file to debug. Select the index.php located in Drupal’s docroot. After accepting the connection, a new server should be configured with the proper path mappings. At this point, you should hit the breakpoint again.

Happy debugging! :-)
What did you learn in today’s blog post? Did you know how to debug Drush commands? Did you know how to trigger a debugging session from the browser? Share your answers in the comments. Also, I would be grateful if you shared this blog post with others.
And don't miss the final blog post in the series, on the many modules available for migrations to Drupal.
This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.
mkdir ~/sandbox
cd ~/sandbox
Now copy the four commands beginning with php from the top of Composer's quick install documentation and paste them into the terminal in your sandbox directory. It's best to use the commands from Composer's own documentation, which are updated after every release to verify the installer's hash; that is why we don't reproduce them here. Once that is done, continue:
php composer.phar create-project drupal-composer/drupal-project:8.x-dev just-drupal --no-interaction
cd just-drupal
php ../composer.phar require drush/drush
If that works, you're good to go!
To use Drush:
vendor/bin/drush

To start PHP's built-in webserver and see the site, use:
php web/core/scripts/drupal quick-start standard

If it works, it will install Drupal with the Standard installation profile and log you in, opening your local site in your browser. Your terminal window, meanwhile, is dedicated to running the server. Open a new tab in your terminal, at the same location (cd ~/sandbox/just-drupal in our example), to be able to run more composer or other commands.
In the migration training, for instance, we have people use Composer to get the code for the Address module, so from ~/sandbox/just-drupal in our example we would run:
php ../composer.phar require drupal/address

And to enable the address module downloaded just above:
vendor/bin/drush en address

Note that the site must be 'running' via the php web/core/scripts/drupal quick-start command, which you can run at any time to get things started and log back in. (Don't worry if you get "Access Denied" while also seeing the administration menu, starting with "Manage", at the top left of your screen; this just means you were already logged in.)
This minimalist approach might not work on your computer either! If it doesn't, there may be more PHP things to install. For instance, if you run into an error about SQLite, you may need to enable or install SQLite support for PHP first. We'll update this blog post with further fixes and workarounds as they come up for our content migration or other training students.
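For example, on Debian or Ubuntu something like the following may be all that is needed; package names vary by distribution and PHP version, so treat this as an illustration rather than a universal fix:

# Install the SQLite extension for PHP (Debian/Ubuntu package name).
sudo apt install php-sqlite3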
You may have noticed that typing php ../composer.phar and vendor/bin/drush is pretty ugly. This can be fixed while retaining essentially the same setup as above by installing Composer globally (for GNU/Linux and Mac OS X, or with the Windows installer for Microsoft Windows) and installing the Drush launcher. Once you've done that, you'll be able to use composer instead of php ../composer.phar and drush instead of vendor/bin/drush.
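A minimal sketch of the global Composer step on GNU/Linux or Mac OS X, assuming composer.phar still sits in your sandbox directory (see Composer's own documentation for the authoritative instructions):

# Move the phar somewhere on your PATH and confirm it runs.
sudo mv ~/sandbox/composer.phar /usr/local/bin/composer
composer --version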
This is for a local development environment or sandbox testing site only! PHP's built-in server, which is relied upon in the above, is absolutely not intended to be used in production environments. Neither, for Drupal, is SQLite, which we're also using. To repeat, this is not meant to be used live!
Updated. I knew it was out there, but didn't find this when I started writing: this is very similar in approach to an article last year by MediaCurrent celebrating this capability coming to Drupal. The main difference is that in our blog post here we use the Composer template for Drupal 8 projects. This avoids having Git as a requirement (but you should always develop by committing to Git!) and also starts with a best-practices Composer setup. Distributions like Drutopia take the same approach.