React.js has become one of the top players in the JavaScript libraries world. Drupal has recently adopted the library to create admin interfaces. WordPress has rebuilt its WYSIWYG editor using React. This training aims to explain the basic concepts outside of the context of any particular CMS implementation. Throughout the training, a static site will be converted into a React application. No previous experience with the library is required.
In the previous posts we talked about the option to manage migrations as configuration entities and some of the benefits this brings. Today, we are going to learn about another useful feature provided by the Migrate Plus module: migration groups. We are going to see how they can be used to execute migrations together and to share configuration among them. Let’s get started.

The Migrate Plus module defines a new configuration entity called migration group. When the module is enabled, each migration can define the group it belongs to. This serves two purposes: allowing the migrations in a group to be executed together, and letting them share configuration.
To demonstrate how to leverage migration groups, we will convert the CSV source example to use migrations defined as configuration and groups. You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD configuration group migration (CSV source) whose machine name is ud_migrations_config_group_csv_source. It comes with three migrations: udm_config_group_csv_source_paragraph, udm_config_group_csv_source_image, and udm_config_group_csv_source_node. Additionally, the demo module provides the udm_config_group_csv_source group.
Note: The Migrate Tools module provides a user interface for managing migrations defined as configuration. It is available under “Structure > Migrations” at /admin/structure/migrate. This user interface will be explained in a future entry. For today’s example, it is assumed that migrations are executed using the Drush commands provided by Migrate Plus. In the past we have used the Migrate Run module to execute migrations, but this module does not offer the ability to import or rollback migrations per group.
The migration groups are defined in YAML files using the following naming convention: migrate_plus.migration_group.[migration_group_id].yml. Because they are configuration entities, they need to be placed in the config/install directory of your module. Files placed in that directory following that pattern will be synced into Drupal’s active configuration when the module is installed for the first time (only). If you need to update the migration groups, you make the modifications to the files and then sync the configuration again. More details on this workflow can be found in this article. The following snippet shows an example migration group:
uuid: e88e28cc-94e4-4039-ae37-c1e3217fc0c4
id: udm_config_group_csv_source
label: 'UD Config Group (CSV source)'
description: 'A container for migrations about individuals and their favorite books. Learn more at https://understanddrupal.com/migrations.'
source_type: 'CSV resource'
shared_configuration: null

The uuid key is optional. If not set, the configuration management system will create one automatically and assign it to the migration group. Setting one simplifies the workflow for updating configuration entities as explained in this article. The id key is required. Its value is used to associate individual migrations with this particular group.
The label, description, and source_type keys give details about the migration group. Their values appear in the user interface provided by Migrate Tools. label is required and serves as the name of the group. description is optional and provides more information about the group. source_type is optional and gives context about the type of source you are migrating from. For example, "Drupal 7", "WordPress", "CSV file", etc.
To associate a migration with a group, set the migration_group key in the migration definition file. For example:
uuid: 97179435-ca90-434b-abe0-57188a73a0bf
id: udm_config_group_csv_source_node
label: 'UD configuration host node migration for migration group example (CSV source)'
migration_group: udm_config_group_csv_source
source: ...
process: ...
destination: ...
migration_dependencies: ...

Note that if you omit the migration_group key, it will default to a null value, meaning the migration is not associated with any group. You will still be able to execute the migration from the command line, but it will not appear in the user interface provided by Migrate Tools. If you want the migration to be available in the user interface without creating a new group, you can set the migration_group key to default. This group is automatically created by Migrate Plus and can be used as a generic container for migrations.
Migration groups are used to organize migrations. Migration projects usually involve several types of elements to import. For example, book reports, events, subscriptions, user accounts, etc. Each of them might require multiple migrations to be completed. Let’s consider a book report migration. The "book report" content type has many entity reference fields: book cover (image), support documents (file), tags (taxonomy term), author (user), citations (paragraphs). In this case, you will have one primary node migration that depends on five migrations of multiple types. You can put all of them in the same group and execute them together.
It is very important not to confuse migration groups with migration dependencies. In the previous example, the primary book report node migration should still list all its dependencies in the migration_dependencies section of its definition file. Otherwise, there is no guarantee that the five migrations it depends on will be executed in advance. This could cause problems if the primary node migration is executed before the images, files, taxonomy terms, users, or paragraphs have been imported into the system.
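For illustration, the primary node migration's migration_dependencies section could look like the following sketch. The migration IDs are hypothetical; use the actual IDs of the migrations in your project:

```yaml
migration_dependencies:
  required:
    # Hypothetical migration IDs for the five supporting migrations.
    - udm_book_report_image
    - udm_book_report_file
    - udm_book_report_taxonomy_term
    - udm_book_report_user
    - udm_book_report_paragraph
```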
It is possible to execute all migrations in a group by issuing a single Drush command with the --group flag. This is supported by the import and rollback commands exposed by Migrate Tools. For example: drush migrate:import --group='udm_config_group_csv_source'. Note that as of this writing, there is no way to run all migrations in a group in a single operation from the user interface. You can import the main migration, and the system will make sure that any migrations listed as explicit dependencies are run in advance. If the group contains more migrations than the ones listed as dependencies, those will not be imported. Moreover, migration dependencies are only executed automatically for import operations. Dependent migrations will not be rolled back automatically if the main migration is rolled back individually.
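For example, to import everything in the example group and later roll it all back from the command line:

```console
# Import all migrations that belong to the group.
drush migrate:import --group='udm_config_group_csv_source'

# Roll back all migrations that belong to the group.
drush migrate:rollback --group='udm_config_group_csv_source'
```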
Note: This example assumes you are using Drush to execute the migrations. At the time of this writing, it is not possible to rollback a CSV migration from the user interface. See this issue in the Migrate Source CSV for more context.
Arguably, the major benefit of migration groups is the ability to share configuration among migrations. In the example, there are three migrations all reading from CSV files. Some configurations like the source plugin and header_offset can be shared. The following snippet shows an example of declaring shared configuration in the migration group for the CSV example:
uuid: e88e28cc-94e4-4039-ae37-c1e3217fc0c4
id: udm_config_group_csv_source
label: 'UD Config Group (CSV source)'
description: 'A container for migrations about individuals and their favorite books. Learn more at https://understanddrupal.com/migrations.'
source_type: 'CSV resource'
shared_configuration:
  dependencies:
    enforced:
      module:
        - ud_migrations_config_group_csv_source
  migration_tags:
    - UD Config Group (CSV Source)
    - UD Example
  source:
    plugin: csv
    # It is assumed that CSV files do not contain a headers row. This can be
    # overridden for migrations where that is not the case.
    header_offset: null

Any configuration that can be set in a regular migration definition file can be set under the shared_configuration key. When the migrate system loads the migration, it will read the migration group it belongs to and pull in any shared configuration that is defined. If both the migration and the group provide a value for the same key, the one defined in the migration definition file will override the one set in the migration group. If a key only exists in the group, it will be added to the migration when the definition file is loaded.
In the example, dependencies, migration_tags, and source options are being set. They will apply to all migrations that belong to the udm_config_group_csv_source group. If you updated any of these values, the changes would propagate to every migration in the group. Remember that you would need to sync the migration group configuration for the update to take effect. You can do that with partial configuration imports as explained in this article.
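The merge behavior can be sketched in a few lines of illustrative Python. This is not Migrate Plus's actual implementation; it only demonstrates the precedence rules described above:

```python
def apply_shared_configuration(group_config, migration_config):
    """Merge a group's shared configuration into a migration definition.

    Keys set in the migration override keys from the group; keys that only
    exist in the group are added to the migration. Nested mappings are
    merged recursively. Illustrative only, not the actual Migrate Plus code.
    """
    merged = dict(group_config)
    for key, value in migration_config.items():
        if isinstance(merged.get(key), dict) and isinstance(value, dict):
            merged[key] = apply_shared_configuration(merged[key], value)
        else:
            merged[key] = value
    return merged

# The group shares a CSV source with no header row; the node migration
# overrides header_offset because its file does have a header row.
group_source = {"plugin": "csv", "header_offset": None}
migration_source = {"path": "udm_people.csv", "header_offset": 0}
merged_source = apply_shared_configuration(group_source, migration_source)
```

Note how the migration's header_offset wins over the group's null value, while the shared plugin key is inherited unchanged.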
Any configuration set in the group can be overridden in specific migrations. In the example, the header_offset is set to null which means the CSV files do not contain a header row. The node migration uses a CSV file that contains a header row so that configuration needs to be redeclared. The following snippet shows how to do it:
uuid: 97179435-ca90-434b-abe0-57188a73a0bf
id: udm_config_group_csv_source_node
label: 'UD configuration host node migration for migration group example (CSV source)'
# Any configuration defined in the migration group can be overridden here
# by re-declaring the configuration and assigning a value.
# dependencies inherited from migration group.
# migration_tags inherited from migration group.
migration_group: udm_config_group_csv_source
source:
  # plugin inherited from migration group.
  path: modules/custom/ud_migrations/ud_migrations_csv_source/sources/udm_people.csv
  ids: [unique_id]
  # This overrides the header_offset defined in the group. The referenced CSV
  # file includes headers in the first row. Thus, a value of 0 is used.
  header_offset: 0
process: ...
destination: ...
migration_dependencies: ...

Another example would be multiple migrations reading from a remote JSON file. Let’s say that instead of fetching a remote file, you want to read a local file. The only file you would have to update is the migration group: change the data_fetcher_plugin key to file and the urls array to the path of the local file. You can try this with the ud_migrations_config_group_json_source module from the demo repository.
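A sketch of what the updated migration group could look like for the JSON example. The file path is hypothetical; adjust it to wherever your local file lives:

```yaml
shared_configuration:
  source:
    # Read a local file instead of fetching a remote URL.
    data_fetcher_plugin: file
    urls:
      # Hypothetical path to the local JSON file.
      - modules/custom/ud_migrations/ud_migrations_json_source/sources/udm_data.json
```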
What did you learn in today’s blog post? Did you know that migration groups can be used to share configuration among different migrations? Share your answers in the comments. Also, I would be grateful if you shared this blog post with others.
Next: What is the difference between migration tags and migration groups in Drupal?
This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.
When you think of training, perhaps you remember an event that you were sent to where you had to learn something boring for your job. The word training does not usually make people smile and jump for joy, that is unless you are talking about Drupal training. These gatherings spread the Drupal knowledge and increase diversity in the community of Drupal developers.
Join us for the next Drupal Global Training Day with our online full day session on getting started with Drupal on November 29th 2017. It will be held online from 9 AM to 4 PM EST.
A link to the live workshop on Zoom will be provided when you sign up!
The Drupal Association coordinates four dates each year as Global Training Days, designed to offer free and low-cost training events to new-to-Drupal developers and to create more Drupal talent around the world. The community is growing exponentially as more people learn how fun and easy it is to get involved and be productive. Volunteer trainers host these global events in person and online. In 2016, a Global Training Days Working Group was established to run this program. There is a Global Training Days group on Drupal.org that lists trainings around the world.
Mauricio Dinarte will be leading the training online on November 29th. As an introduction to Drupal, the training needs to cover certain things that are specific to Drupal, some of which are not that intuitive. It is important to cover the very basics in terminology and process. An introductory class can include many things, but this list is what Mauricio covers during the day-long event:
The outcome of the day of training is that everyone walks away understanding the main moving parts of Drupal and a bit about what they do. Of course you will not become a developer overnight, but you will have enough information to build a simple site and then explore more of Drupal on your own.
You can follow up with many online tutorials and by joining the Drupal group in your area and attending the meetings. At meetings you will connect with other people at different levels of skill and you will be helped and helpful at the same time! If there is no Drupal group in your area, I suggest you start one. It can start as easily as posting online that you will be at a specific location doing Drupal at a certain time of day - you will be surprised at who may show up. If no one shows up the first time, try again or try a different location. One of the best things about Drupal is the community and how large and connected we are. If you start a group, people will usually help it grow.
Bringing new people to Drupal is not only good for increasing the size of the member base, it also brings diversity and reaches people that may never have had an opportunity or access to a free training. Drupal trainings are usually held at a university in or near a city which attracts people from different backgrounds and cultures. We can also reach people that are not in a city or near a school by sharing online.
Have you ever thought about volunteering at a Global Training Days event? We have a blog about organizing your own Global Training Days workshop that can get you started. This is a great way to get to know the people in the community better, up your skills and perhaps share something you have learned. I learned much about programming by assisting developers at sprints and trainings. This is where the real fun begins. Learning does not have to be stressful, and in the Drupal community people are friendly and welcoming. No question is stupid and even those with no experience have valuable skills. Developers love people without prior experience because they make the perfect testing candidates for UI and UX. The down side is that Drupal is so captivating that you will probably not remain a newbie for very long, so enjoy it while it lasts.
One of the true highlights of Global Training Days is seeing all the people around the world gain valuable skills and share knowledge. We hope you can join us.
Community-managed categories is an idea from just about the beginning of my time as a web developer. As "Community-managed taxonomy" it was my submission to the 2007 Summer of Code, barely a couple years into my time as a Drupal developer.
The Drupal module Community Managed Taxonomy, or CMT, variously known as Community Managed Categories, seeks to bring the possibility of mass participation to categorization (taxonomy) and therefore potentially site structure.
As the project page put it:
Community-managed taxonomy (CMT) opens categorization of content to the site's community. Users can influence both what terms nodes are tagged with and how these terms are themselves organized.
It can also be used to make structured tags on the fly. Users do not need to be logged in to make or propose terms for content.
I very much hope to get back to this work. I never did get the module working—Agaric was a new company and my father became ill and was killed by the hospital that summer, and my mentor and i were not the best match—and after failing to complete the module and get the second stipend, the right time and circumstances have not yet returned. Please contact us if you may be able to help make the time right in 2021!
This capability to allow a large community to coordinate in categorizing may be more salient when democratically-moderated mass communication is made possible. That's a goal i have pursued even longer, and i think it is more important and possibly logically prior to community-managed taxonomy. First we need community-managed communication, so that the resources we build together can reach, be distributed to, the people who should know. But building community-managed communication may bring up the need for community-managed categories directly, too— how do we decide what the groups are that we can manage communication within? That's a job for community-managed taxonomy, probably.
It is all tied up to what I have been aiming for for decades. This description is from 2008:
People have always needed something better than mailing lists— or other communication tools as they exist now. We need something that can reach millions of people (or billions– everyone) and still be open to everyone on an equal basis. Reaching everyone means filtering to reduce quantity and increase quality. Staying open to everyone means that the filtering must not be controlled by any group, must in some true sense belong to everyone.
The Internet has this potential. (The issue of access remains crucial, but is separate from helping the Internet reach closer to its potential for people who do have access.)
People Who Give a Damn is incorporated as a nonprofit organization to connect, without interference and without wasting anyone's time, everyone who gives a damn.
As the first technique to achieve this, anyone signed up to receive messages in a network can submit a message to be sent. The message will be publicly available immediately, but it will be moderated by a random sample of other people in the network pulled to serve jury duty. If they decide it is important enough to send to everyone in the network, it is sent to everyone. If not, the message will have more limited distribution to the sender's personal contacts and possibly to groups within the overall network to which the sender belongs.
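The moderation flow described above can be sketched in Python. The jury size, vote threshold, and approves() callback are all illustrative assumptions; the description does not specify those details:

```python
import random

def moderate_message(message, members, sender, approves, jury_size=5, threshold=0.6):
    """Sketch of the jury-moderation flow described above.

    jury_size, threshold, and the approves() callback are illustrative
    assumptions, not specified by the original description.
    """
    # Draw a random jury from the network, excluding the sender.
    eligible = [m for m in members if m != sender]
    jury = random.sample(eligible, jury_size)
    # Count approvals; enough votes sends the message to everyone.
    votes = sum(1 for juror in jury if approves(juror, message))
    if votes / jury_size >= threshold:
        return list(members)  # full-network distribution
    return []  # limited distribution to the sender's own contacts (not modeled)
```

The random sample is the key design point: no fixed group of moderators controls the filter, so the filtering in some sense belongs to everyone.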
This is simple. Yet it will make possible horizontal communication, not top-down few-to-many broadcasts, that is also mass communication. We need horizontal mass communication because we need mass cooperation and collaboration. It is possible with current technology, and necessary for the well-being of ourselves, our friends and family, our fellow humans, our Earth. Anyone interested in updates on progress or how they can help, please contact me.

If you would like me to speak at your event, let me know the details below.
In recent articles, we have presented some recommendations and tools to debug Drupal migrations. Using a proper debugger is definitely the best way to debug Drupal, be it migrations or other subsystems. In today’s article, we are going to learn how to configure XDebug inside DrupalVM to connect to PHPStorm: first via the command line using Drush commands, and then via the user interface using a browser. Let’s get started.
Important: User interfaces tend to change. Screenshots and referenced on-screen text might differ in new versions of the different tools. They can also vary per operating system. This article uses menu items from Linux. Refer to the official DrupalVM documentation for detailed installation and configuration instructions. For this article, it is assumed that VirtualBox, Vagrant, and Ansible are already installed. If you need help with those, refer to DrupalVM’s installation guide.
First, get a copy of DrupalVM by cloning the repository or downloading a ZIP or TAR.GZ file from the available releases. If you downloaded a compressed file, expand it to have access to the configuration files. Before creating the virtual machine, make a copy of default.config.yml into a new file named config.yml. The latter will be used by DrupalVM to configure the virtual machine (VM). In this file, make the following changes:
# config.yml file
# Based off default.config.yml
vagrant_hostname: migratedebug.test
vagrant_machine_name: migratedebug
# For dynamic IP assignment the 'vagrant-auto_network' plugin is required.
# Otherwise, use an IP address that has not been used by any other virtual machine.
vagrant_ip: 0.0.0.0
# All the other extra packages can remain enabled.
# Make sure the following three get installed by uncommenting them.
installed_extras:
  - drupalconsole
  - drush
  - xdebug
php_xdebug_default_enable: 1
php_xdebug_cli_disable: no

The vagrant_hostname is the URL you will enter in your browser’s address bar to access the Drupal installation. Set vagrant_ip to an IP that has not been taken by another virtual machine. If you are unsure, you can set the value to 0.0.0.0 and install the vagrant-auto_network plugin. The plugin will make sure that an available IP is assigned to the VM. In the installed_extras section, uncomment xdebug and drupalconsole. Drupal Console is not necessary for getting XDebug to work, but it offers many code introspection tools that are very useful for Drupal debugging in general. Finally, set php_xdebug_default_enable to 1 and php_xdebug_cli_disable to no. These last two settings are very important for being able to debug Drush commands.
Then, open a terminal and change directory to where the DrupalVM files are located. Keep the terminal open; we are going to execute various commands from there. Start the virtual machine by executing vagrant up. If you had already created the VM, you can still make changes to the config.yml file and then reprovision. If the virtual machine is running, execute the command: vagrant provision. Otherwise, you can start and reprovision the VM in a single command: vagrant up --provision. Finally, SSH into the VM by executing vagrant ssh.
By default, DrupalVM will use the Drupal Composer template project to get a copy of Drupal. That means you will be managing your module and theme dependencies using Composer. When you SSH into the virtual machine, you will be in the /var/www/drupalvm/drupal/web directory. That is Drupal’s docroot. The composer.json file that manages the installation is actually one directory up. Normally, if you run a Composer command from a directory that does not have a composer.json file, Composer will try to find one up the directory hierarchy. Feel free to manually go one directory up or rely on Composer’s default behaviour to locate the file.
For good measure, let’s install some contributed modules. Inside the virtual machine, in Drupal’s docroot, execute the following command: composer require drupal/migrate_plus drupal/migrate_tools. You can also create a directory at /var/www/drupalvm/drupal/web/modules/custom and place the custom module we have been working on throughout the series. You can get it at https://github.com/dinarcon/ud_migrations.
To make sure things are working, let’s enable one of the example modules by executing: drush pm-enable ud_migrations_config_entity_lookup_entity_generate. This module comes with one migration: udm_config_entity_lookup_entity_generate_node. If you execute drush migrate:status, the example migration should be listed.
With Drupal already installed and the virtual machine running, let’s configure PHPStorm. Start a new project pointing to the DrupalVM files. Feel free to follow your preferred approach to project creation. For reference, one way to do it is by going to "Files > Create New Project from Existing Files". In the dialog, select "Source files are in a local directory, no web server is configured yet." and click next. Look for the DrupalVM directory, click on it, click on “Project Root”, and then “Finish”. PHPStorm will begin indexing the files and detect that it is a Drupal project. It will prompt you to enable the Drupal coding standards, indicate which directory contains the installation path, and if you want to set PHP include paths. All of that is optional but recommended, especially if you want to use this VM for long term development.
Now the important part. Go to “File > Settings > Languages & Frameworks > PHP”. In the panel, there is a text box labeled “CLI Interpreter”. At its right end, there is a button with three dots like an ellipsis (...). The next step requires that the virtual machine is running because PHPStorm will try to connect to it. After verifying that this is the case, click the plus (+) button at the top left corner to add a CLI Interpreter. From the list that appears, select “From Docker, Vagrant, VM, Remote...”. In the “Configure Remote PHP Interpreter” dialog select “Vagrant”. PHPStorm will detect the SSH connection to connect to the virtual machine. Click “OK” to close the multiple dialog boxes. When you go back to the “Languages & Frameworks” dialog, you can set the “PHP language level” to match the version of the Remote CLI Interpreter.


You are almost ready to start debugging. There are a few things pending to do. First, let’s create a breakpoint in the import method of the MigrateExecutable class. You can go to “Navigate > Class” to search the project by class name, or click around in the project structure until you find the class. It is located at ./drupal/web/core/modules/migrate/src/MigrateExecutable.php in the VM directory. You can add a breakpoint by clicking on the bar to the left of the code area. A red circle will appear, indicating that the breakpoint has been added.
Then, you need to instruct PHPStorm to listen for debugging connections. For this, click on “Run > Start Listening for PHP Debugging Connections”. Finally, you have to set some server mappings. For this you will need the IP address of the virtual machine. If you configured the VM to assign the IP dynamically, you can skip this step momentarily. PHPStorm will detect the incoming connection, create a server with the proper IP, and then you can set the path mappings.
Let’s switch back to the terminal. If you are not inside the virtual machine, you can SSH into the VM executing vagrant ssh. Then, execute the following command (everything in one line):
XDEBUG_CONFIG="idekey=PHPSTORM" /var/www/drupalvm/drupal/vendor/bin/drush migrate:import udm_config_entity_lookup_entity_generate_node
For the breakpoint to be triggered, the following needs to happen:
- The Drush command must be executed from the project’s vendor directory. DrupalVM has a globally available Drush binary located at /usr/local/bin/drush. That is not the one to use. For debugging purposes, always execute Drush from the vendor directory.
- The command must be run with the XDEBUG_CONFIG environment variable set to “idekey=PHPSTORM”. There are many ways to accomplish this, but prepending the variable as shown in the example is a valid way to do it.

When the command is executed, a dialog will appear in PHPStorm. In it, you will be asked to select a project or a file to debug. Accept what is selected by default for now. By accepting the prompt, a new server will be configured using the proper IP of the virtual machine. After doing so, go to “File > Settings > Languages & Frameworks > PHP > Servers”. You should see one already created. Make sure the “Use path mappings” option is selected. Then, look for the direct child of “Project files”. It should be the directory on your host computer where the VM files are located. In that row, set the “Absolute path on the server” column to /var/www/drupalvm. You can delete any other path mapping. There should only be one from the previous prompt. Now, click “OK” in the dialog to accept the changes.
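As an alternative to prepending the variable to every command, you can export it once for the SSH session. Both approaches set the same environment variable:

```console
# Prepend per command:
XDEBUG_CONFIG="idekey=PHPSTORM" /var/www/drupalvm/drupal/vendor/bin/drush migrate:status

# Or export once for the whole session:
export XDEBUG_CONFIG="idekey=PHPSTORM"
/var/www/drupalvm/drupal/vendor/bin/drush migrate:status
```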


Finally, run the Drush command from inside the virtual machine once more. This time the program execution should stop at the breakpoint. You can use the Debug panel to step over each line of code and see how the variables change over time. Feel free to add more breakpoints as needed. In the previous article, there are some suggestions about that. When you are done, let PHPStorm know that it should no longer listen for connections. For that, click on “Run > Stop Listening for PHP Debugging Connections”. And that is how you can debug Drush commands for Drupal migrations.

If you also want to be able to debug from the user interface, go to https://www.jetbrains.com/phpstorm/marklets/ and generate the bookmarklets for XDebug. The IDE Key should be PHPSTORM. When the bookmarklets are created, you can drag and drop them into your browser’s bookmarks toolbar. Then, you can click on them to start and stop a debugging session. The IDE needs to be listening for incoming debugging connections, as was the case for Drush commands.

Note: There are browser extensions that let you start and stop debugging sessions. Check the extensions repository of your browser to see which options are available.
Finally, set breakpoints as needed and go to a page that would trigger them. If you are following along with the example, you can go to http://migratedebug.test/admin/structure/migrate/manage/default/migrations/udm_config_entity_lookup_entity_generate_node/execute. Once there, select the “Import” operation and click the “Execute” button. This should open a prompt in PHPStorm to select a project or a file to debug. Select the index.php located in Drupal’s docroot. After accepting the connection, a new server should be configured with the proper path mappings. At this point, you should hit the breakpoint again.

Happy debugging! :-)
What did you learn in today’s blog post? Did you know how to debug Drush commands? Did you know how to trigger a debugging session from the browser? Share your answers in the comments. Also, I would be grateful if you shared this blog post with others.
And don't miss the final blog post in the series, on the many modules available for migrations to Drupal.
This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.
La identidad de la marca de MASS Design necesitaba una actualización para que coincida con su nuevo trabajo de arquitectura y un impacto en expansión, con un sitio web que coincida. Querían un conjunto de herramientas que su personal pudiera emplear para afirmar con flexibilidad su identidad actualizada y contar historias de manera que pudieran seguir el ritmo de su evolución.
Al mismo tiempo, sabían que parte del diseño es la accesibilidad. Con una gran audiencia en toda África, fue crítico que el diseño no comprometiera el rendimiento, asegurando que los visitantes en dispositivos con poco ancho de banda pudieran leer sus historias tan fácilmente como sus contrapartes estadounidenses.
Todd Linkner tomó la nueva identidad visual de MASS Design y la tradujo en componentes que se podrían ensamblar en innumerables formatos, dando un amplio control creativo a los editores de contenido. Tomamos el sitio, construimos el sitio con la participación y aprobación regular de MASS.
Para lograr este diseño basado en componentes, Todd, como un diseñador notablemente experto en Drupal, tenía en mente el módulo Paragraph, y es lo que elegimos para implementar la funcionalidad (esto fue antes del período de dos años en que cada tercera charla en Drupal campamentos fue sobre el módulo de párrafos). En lugar de crear tipos de contenido con campos fijos siempre en el mismo orden, los párrafos nos permitieron definir una variedad de formatos de medios (carrusel, cuadrícula de imagen, texto, etc.) que podrían agregarse y reorganizarse a voluntad.
Editors can customize the arrangement of different content elements with the Paragraphs module.
Previously, the MASS Design website loaded multi-megabyte images on the home page, which loaded slowly for all visitors, particularly those with limited bandwidth. (It also cost hundreds of dollars a month in bandwidth charges from their cloud service provider.)
Performance is part of design, and when we prioritize our audiences with fewer resources, we stay true to our values of access and inclusion. We did this for MASS Design by keeping performance at the forefront of conversations from the beginning.
Drupal is a powerful CMS that, out of the box, resizes images and performs internal caching. We ensured that the new MASS site would load quickly through strategic configuration and the use of key third-party tools.
We started with aggressive but sensible caching. We turned to a content delivery network (CDN) with locations near Kigali, Rwanda, to make this happen. (The closest we could get was Mombasa, Kenya.) Using the WebPageTest.org API, we scripted performance tests to regularly check the site's page load speeds from Johannesburg, South Africa, the closest (not very close) location offered at the time.
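Scripted checks like these can be sketched as a small shell helper that builds a WebPageTest API request against its runtest.php endpoint. This is only an illustration: the location ID and API key below are placeholders, not the values used on the project, and a real script would also URL-encode the parameters.

```shell
# Build a WebPageTest API request URL for a page, test location, and API key.
# The location ID and key are illustrative placeholders, not real values.
wpt_test_url() {
  local page="$1" location="$2" key="$3"
  printf 'https://www.webpagetest.org/runtest.php?url=%s&location=%s&k=%s&f=json\n' \
    "$page" "$location" "$key"
}

# Example: queue a test from a Johannesburg agent (hypothetical location ID).
wpt_test_url "https://example.org" "Johannesburg:Chrome" "YOUR_API_KEY"
```

Feeding a URL like this to curl on a schedule (cron, for instance) and recording load times from the JSON response is one way to watch a site's page speed from a fixed location over time.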
Two entry points to the site were configured: one for the public, using CDN caching, and a separate entry point for editors on a subdomain. This structure kept the CDN working correctly; features that rely on cookies, in particular logging in as an authenticated user, do not work with a low-cost CDN.
Since updating their website, MASS now tells compelling stories about their work, resulting in 84% more page views, more pages per session, and more returning users. Site traffic from Rwanda is ten times higher, thanks in part to the improved site performance.
Since the initial site upgrade, we have also added on-site donation forms, an email signup feature for their key resource documents, and an expanded Paragraphs-driven storytelling toolset.
We do much more than build websites. We build tools that work for the social change we want to see. We teach and empower others to use digital technology safely, responsibly, and effectively. We participate in building and supporting communities that are founded on democratic-ownership and knowledge-sharing. We engage in a plethora of movements, and we look for ways to bridge the gaps between them. Akin to the mushrooms after which we are named, we work across vast nutritive networks so that we can bring nourishment to our human ecosystems. This page is our feeble attempt to showcase the meaning behind Agaric Tech Cooperative.
We have been building innovative web-based homes for scientific and health communities since 2008— most recently partnering with the National Institute for Children's Health Quality.
The Find It program locator and event discovery platform for cities, towns, counties, and most any geographical community. Use granular search to locate resources that otherwise may be hard to find!
The Drutopia Cooperative Platform by Agaric combines the ease of use you find from software as a service website builders (like Wix and Squarespace) with the freedom and control of free (libre) software. It is LibreSaaS like Ghost(Pro) or WordPress.com, but it is built for and with grassroots groups. Importantly, the platform is collectively controlled by the people who rely on it.
Host your school online. Stay safely at home while learning and interacting with others. We will host your online courses and video chat without connections to Google or Facebook and free from spyware or malware that can go undetected in proprietary software. We are using free and open source solutions that are currently used by colleges and schools around the world. Agaric can build upon these solutions to customize your experience and suit your needs.
CommunityBridge is an online video conferencing service that we provide freely to trusted friends and activists in our network. It is also where we host our community events. The video conferencing software behind it is BigBlueButton, a free and open source software built for educational communities to learn together safely in a digital space without fear of being passively tracked and surveilled.
Agaric hosts a weekly online gathering for people to meet and share what they have learned. Sometimes we talk about the logistics of worker-owned cooperatives, and sometimes we give technical talks and look at code. Every week we also get to know each other better, and this leads to sharing work on projects. Learn more about Show and Tell. One of our most popular meetings was a presentation and discussion on a real-life condition that affects quite a few developers: imposter syndrome.
Agaric hosts Movie Nights, where we support organizations in sponsoring facilitated movie-watching events and engage in deep discussions that sometimes reveal actionable steps that community members can take together to overcome social and political issues. Currently we are only able to watch movies that are hosted on Vimeo, YouTube, or DailyMotion. If you would like to suggest a movie, we will facilitate the event. Make some popcorn, grab your favorite beverage, and invite your friends!
Drupal is a free software content management system powered by one of the largest communities in the software world— very much including Agaric.
We contribute to Drupal core and more than 100 modules that extend its functionality. All of these useful projects are free for anyone to download, use or modify for their own needs.
We teach development teams and solo coders in-person or online, with the program tailored to the problems you are trying to solve. We have practical experience in developing a multitude of web sites, migrating content, and running technology projects - we love to learn and to teach. We will impart the knowledge and skills you need to get work done, and done right.
Mauricio's epic month of migration tutorials is an expansive resource to teach you how to perform Drupal migrations. At some point, almost every website will need to be migrated to a newer or more secure platform, or to a platform with new and different features. Migrating to a new and different platform or server happens for many reasons during the life of a project. Be prepared!
Drutopia is a flexible content management system with many features built specially for grassroots organizations. It already helps groups share goals, call for action, collect donations, and report progress. Most important, Drutopia's developers seek to design ongoing improvements and capabilities with groups organizing for a better world.
mkdir ~/sandbox
cd ~/sandbox
Now copy the four commands beginning with php from the top of Composer's quick install documentation and paste them into the terminal in your sandbox directory. It's best to use Composer's own commands, which are updated after every release to verify the download hash; that is why we don't reproduce them here. Once that is done, continue:
php composer.phar create-project drupal-composer/drupal-project:8.x-dev just-drupal --no-interaction
cd just-drupal
php ../composer.phar require drush/drush
If that works, you're good to go!
To use Drush:
vendor/bin/drush
To start PHP's built-in webserver and see the site, use:
php web/core/scripts/drupal quick-start standard
If it works, it will install Drupal with the Standard installation profile and log you in, opening your local site in your browser. Your terminal window, meanwhile, is dedicated to running the server. Open a new tab in your terminal at the same location (cd ~/sandbox/just-drupal in our example) to be able to run more composer or other commands.
In the migration training for instance we have people use composer to get the code for the address module, so from ~/sandbox/just-drupal in our example we would run:
php ../composer.phar require drupal/address
And to enable the address module downloaded just above:
vendor/bin/drush en address
Note that the site must be 'running' with the php web/core/scripts/drupal quick-start command, which you can run at any time to get things started and log back in. (Don't worry if you get "Access Denied" while also seeing the administration menu starting with "Manage" at the top left of your screen; this just means you were already logged in.)
This minimalist approach might not work for your computer either! If it doesn't, there may be more PHP pieces to install. For instance, if you run into an error about SQLite, you may need to install or enable SQLite support for PHP first. We'll update this blog post with further fixes and workarounds as they come up for our content migration or other training students.
You may have noticed that typing php ../composer.phar and vendor/bin/drush is pretty ugly. This can be fixed while retaining essentially the same setup as above by installing Composer globally (for GNU/Linux and Mac OS X, or with the Windows installer for Microsoft Windows) and installing the Drush launcher. Once you've done that, you'll be able to use composer instead of php ../composer.phar and drush instead of vendor/bin/drush.
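Until you do install Composer and the Drush launcher globally, one stopgap is to define shell functions in your current session that give you the same short names. This is just a sketch, assuming the ~/sandbox/just-drupal layout used throughout this post:

```shell
# Session-local shortcuts standing in for a global Composer install and the
# Drush launcher. Paths assume the ~/sandbox/just-drupal example layout.
composer() { php "$HOME/sandbox/composer.phar" "$@"; }
drush()    { "$HOME/sandbox/just-drupal/vendor/bin/drush" "$@"; }

# Now the shorter forms work from anywhere in the session, e.g.:
#   composer require drupal/address
#   drush en address
```

These definitions last only for the current terminal session; the global installs described above are the durable fix.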
This is for a local development environment or sandbox testing site only! PHP's built-in server, which is relied upon in the above, is absolutely not intended to be used in production environments. Neither, for Drupal, is SQLite, which we're also using. To repeat, this is not meant to be used live!
Updated. I knew it was out there, but didn't find this when I started writing. This is very similar in approach to an article last year by MediaCurrent celebrating this capability coming to Drupal. The main difference is that in our blog post here we use the Composer template for Drupal 8 projects. This avoids having Git as a requirement (but you should always develop by committing to Git!) and also starts with a best-practices Composer setup. Distributions like Drutopia take the same approach.
Agaric makes websites and applications that matter. We provide development services, training, and consulting to help define and meet your needs.

The ever-intensifying climate crisis is an existential threat. It was brought about by people designing and building technology reliant upon the extraction and burning of fossil fuels, and by an economic system reliant on ever-increasing consumption, demanding energy production the planet cannot sustain.
From September 20th through September 27th millions of people across the globe skipped school, walked out of workplaces, and joined in the streets to demand bold, swift action for climate justice. Many in the tech industry, including ourselves, went on digital strike, shuttering our websites for the day to join in the action. In fact, we partnered with 350.org to improve the mapping tool used to help people organize and find climate strike actions.
But what does bold, swift action mean? What is required of us to respond to the enormity of this climate catastrophe?
One thing is for certain - it won't be fixed with the same values, systems and forces that ushered in this emergency.
There are many ideas out there on what we should do, with many different names. Whatever the specifics, the path to climate justice rests on our ability to move away from extractive relationships with our earth and quickly grow regenerative relationships instead. Many are calling this the Commons Transition. Those of us working in the tech industry have tremendous influence in participating in and assisting with this transition.
The Commons is a way to organize and manage resources collaboratively among the community of producers and users. It exists outside of both the public and private sectors. The commons is flourishing all around us, particularly where indigenous communities have been able to defend themselves and their land. The Zapatista caracoles in Chiapas, Mexico, the confederalist communes of Rojava, and the Potato Park in Peru are just a few examples. All are a blend of retaining generations-long wisdom of living in right relation, an unlearning of the extractive patterns that have gained dominance, and innovating new ways of commoning.
Free and open-source software movements put into practice much of the Commons' values. Combining this approach to software development with cooperative, community-based economic models can help us build technology that is sustainable and helps facilitate and defend the commons.
As mentioned, the commons is all around us. Yet these commons are constantly battling enclosure, and many more commons projects need our support to take hold. The first step is to take stock of the commons we are currently part of. Millions of us are already members of credit unions, food cooperatives, worker-owners in cooperatives, purchasers of cooperatively produced goods and visitors to cooperatively managed parks and open spaces.
This morning, I walked to the Westwood Food Cooperative to buy produce and am now typing this essay at Kahlo's, a Mexican-American family-run restaurant with a plant-heavy menu. I am a worker-owner of Agaric, a tech cooperative. We're also members of MayFirst, a cooperative itself which provides the hosting infrastructure for our clients. The commons are everywhere.
How might we better support the commons we are already part of?
The best way to find out is to ask. To find commons near you, visit SolidarityEconomy.us. It's not comprehensive, but it's a good start.
Once better connected to the commons you are part of, examine what other aspects of your life can align with the commons. If you tend to shop at a big box store, look into local food co-ops and community supported agriculture (CSA). If you work for an agency, consider how you might transition it to a cooperative with a social-mission prioritizing sustainable practices. If you don't see your company moving in a democratic direction, you can still shift it that way by unionizing. These are tall orders, but then again, these times call for tall orders.
The quickest way to effect change is to work within our spheres of influence. But one person shopping at a food co-op instead of Wal-Mart is just a drop in the bucket. How do we make our individual actions add up to the systemic change we need? In the essay "We Can't Do It Ourselves,"
published in Low Tech Magazine, Kris De Decker explains,
A sustainability policy that focuses on systemic issues reframes the question from “how do we change individuals’ behaviours so that they are more sustainable?” to “how do we change the way society works?”.
De Decker elaborates that,
Addressing the sociotechnical underpinnings of “behaviour” involves attempting to create new infrastructures and institutions that facilitate sustainable lifestyles, attempting to shift cultural conventions that underpin different activities, and attempting to encourage new competences that are required to perform new ways of doing things.
Low Tech Magazine is leading by example by running their website completely off of solar power they themselves produce. I encourage fellow designers and developers to check out the site and the detailed write up on the meticulous, thorough work they did to minimize the site's energy usage and rig up solar panels to power the server the site runs on.
Similarly, however we can, as tech workers we should be building our technology sustainably and in service of regenerative relations.
We must move away from fossil fuels and towards clean, renewable energy. For Agaric, this means getting the hosting providers we partner with to use clean, renewable energy. Most of our sites run on MayFirst, a tech cooperative we are members and leaders within. I started a discussion thread to transition our tech stack to clean, renewable sources. The discussion has become an initiative. Our work is just getting started, but this shows the benefit of meeting our needs democratically.
You can see where your hosting providers stand by using https://ecograder.com
If your host isn't on 100% renewables, talk to them and work to get them there.
You can also choose to go with a hosting provider that's already green, such as https://greenhost.net
Web development trends have led to massive growth in the size and resource usage of websites. Counter-trends are appearing in response: static site generators, lean content strategies, and sustainable design. Just as we care about the energy efficiency of our cars, homes, and appliances, we should care about the carbon footprint of our websites and apps. Plus, simpler, more efficient online tools are easier to maintain and load faster for users (especially important for those on limited data plans and older devices). As contributors and proponents of Drupal, there is certainly room for improvement within our practice for more sustainable design.
Technology is not neutral. What it is put into service of has tremendous impact. The climate strike demands of the Tech Workers Coalition are a great starting point.
Work to formalize these principles at your own workplace and make them public. If you have contracts with fossil fuel companies or climate deniers work to end them. A great group to link up with around this is ClimateAction.tech
Not all of us work in democratic workplaces. For most of us, creating change in hierarchical startups and corporations is more difficult. Most companies are oriented towards shareholder needs, not worker or planet needs. Unions build counter-power, forcing businesses to take our needs into account.
This is a daunting task. However, worker self-organizing is on the rise, especially in our industry. What starts as an internal petition signed and sent to management can build into lasting formations that shift companies to align with the values we need to make the transition.
Joining the Campaign to Organize Digital Employees is a great way to get started.
Transitions aren't easy. They're uncomfortable, they require risk and there's no guarantee of success. Still, fixing our sights on a world of care, balance and right relation with the earth and then making it so, in our home and in our workplace is deeply rewarding.
You may in many ways be without peer, but there are always competitors for the attention of your audience. Identifying top peers and reviewing their respective content helps you get a wider perspective both on what potential listeners, members, and donors will be seeing and what seems to be working for others— we can start thinking together about where to emulate and where to differentiate, informing all of our work together.
Building on the review of peers, Agaric will work with you to briefly interview current and potential clients and develop personas and user stories.
Along with bringing consistency to cooperative output (and saving time sweating the details every time they come up), a good content (copywriting) style guide incorporates suggestions for clear and effective writing and helps your unique aspects shine through. It can help you tell your story in a consistent way and let individual personalities come across while maintaining collective coherence.
Agaric applies a Lean UX Research methodology to answer critical user experience questions with relevant, meaningful, and actionable data.
From reviewing your goals and audiences we recommend answering the following questions:
We recommend and use the following research and testing approaches:
Not all will fit the purpose or budget of every part of a project, but good insights into what to build and why are more valuable than simply building well and quickly.
We always recommend at least one round dedicated to measuring and improving. Using analytics and user tests, we identify what is working, what is not and needs to be changed, and what is missing and needs to be built. We then build on the previous work to do the fixes and enhancements with the highest expected impact.