I've always had a passion for good design and healthy coding, even back in the days of owning a web site cart in downtown Natick. Back then, my business partner and I made all natural HTML roll-up web sites and, as an incentive for customers to wait in line, we baked Drupal into different flavored designs.
The Drupal became remarkably popular and before we knew it, the Agaric collective was born.
Today, you can enjoy Agaric's many great flavors. They are baked, all natural and totally delicious. So treat yourself well, and treat yourself often.
Visit http://www.agaric.com to find more about our other sites, discover great recipes, and stay in touch.
Dan Hakimzadeh
Co-Founder
Here is how to deal with the surprising-to-impossible-seeming error "Unable to uninstall the MySQL module because: The module 'MySQL' is providing the database driver 'mysql'."
Like, why is it trying to uninstall anything when you are installing? Well, it is because you are installing with existing configuration, and that configuration is out of date. This same problem will happen on configuration import on a Drupal website, too. (See the update below for those steps!)
Really this error message is a strong reminder to always run database updates and then commit any resulting configuration changes after updating Drupal core or module code.
And so the solution is to roll back the code to Drupal 9.3, do your installation from existing configuration, return to the current code, run the database updates, export the configuration, and commit the result.
For example:
git checkout <commit-hash-of-earlier-composer-lock>
composer install
drush -y site:install drutopia --existing-config
git checkout main
composer install
drush -y updb
drush -y cex
git status # Review what is here; git add -p can also help
git add config/
git commit -m "Apply configuration updates from Drupal 9.4 upgrade"
The system update enable_provider_database_driver is the post-update hook that is doing the work here to "Enable the modules that are providing the listed database drivers." Pretty cool feature, and a strong reminder to always, always run database updates and commit any configuration changes immediately after any code updates!
This is what you probably already did, before the drush -y cim failed (luckily, it failed):
composer update
drush -y updb
All that is great! Now continue, not with a config import, but with a config export:
drush -y cex
git status # Review what is here; git add -p can also help
git add config/
git commit -m "Apply configuration updates from Drupal 9.4 upgrade"
Remember, after every composer update and database update, you need to do a configuration export and commit the results: database updates can change configuration, and if you do not commit those changes, you will undo intentional and potentially important changes on a configuration import. If you ran into this problem on a configuration import, it is a sign of a breakdown in discipline in following these steps!
Every time you bring in code changes with composer update, all of this must be part and parcel:
composer update
drush -y updb
drush -y cex
git status # Review what is here; git add -p can also help
git add config/
git commit -m "Apply configuration from database updates"
What'd we get wrong? What resources are we missing? Send us a note or tell us in the comments below!
Agaric is excited to announce online training on Drupal migrations and upgrades. In July 2020, we will offer three trainings: Drupal 8/9 content migrations, Upgrading to Drupal 8/9 using the Migrate API, and Getting started with Drupal 9.
We have been providing training for years at Drupal events and privately for clients. At DrupalCon Seattle 2019, our migration training was sold out with 40+ attendees and received very positive feedback. We were scheduled to present two trainings at DrupalCon Minneapolis 2020: one on Drupal migrations and the other on Drupal upgrades. When the conference pivoted to an online event, all trainings were cancelled. To fill the void, we are moving the full training experience online for individuals and organizations who want to learn how to plan and execute successful Drupal migration/upgrade projects.
Drupal is always evolving and the Migrate API is no exception. New features and improvements are added all the time. We regularly update our curriculum to cover the latest changes in the API. This time, both trainings will use Drupal 9 for all the examples! If you are still using Drupal 8, don't worry as the example code is compatible with both major versions of Drupal. We will also cover the differences between Drupal 8 and 9.
In this training you will learn to move content into Drupal 8 and 9 using the Migrate API. An overview of the Extract-Transform-Load (ETL) pattern that migrate implements will be presented. Source, process, and destination plugins will be explained to show how each affects the migration process. By the end of the workshop, you will have a better understanding of how the migrate ecosystem works and the thought process required to plan and perform migrations. All examples will use YAML files to configure migrations. No PHP coding required.
Date: Tuesday, July 21, 2020
Time: 9 AM – 5 PM Eastern time
Cost: $500 USD
In this training you will learn to use the Migrate API to upgrade your Drupal 6/7 site to Drupal 8/9. You will practice different migration strategies, accommodate changes in site architecture, get tips on troubleshooting issues, and much more. After the training, you will know how to plan and execute successful upgrade projects.
Date: Thursday, July 23, 2020
Time: 9 AM – 5 PM Eastern time
Cost: $500 USD
We are also offering a training for people who want to get a solid foundation in Drupal site building. Basic concepts will be explained and put into practice through various exercises. The objective is that someone, who might not even know about Drupal, can understand the different concepts and building blocks to create a website. A simple, fully functional website will be built over the course of the day-long class.
Date: Monday, July 13, 2020
Time: 9 AM – 5 PM Eastern time
Cost: $250 USD
Anyone is eligible for a 15% discount on their second training. Additionally, if you are a member of an under-represented community who cannot afford the full price of the training, we have larger discounts and full scholarships available. Ask Agaric to learn more about them.
We also offer customized training for you or your team's specific needs. Site building, module development, theming, and data migration are some of the topics we cover. Check out our training page or ask Agaric for more details. Custom training can be delivered online or on-site in English or Spanish.
Mauricio Dinarte is a frequent speaker and trainer at conferences around the world. He is passionate about Drupal, teaching, and traveling. Over the last few years, he has presented 30+ sessions and full-day trainings at 20+ DrupalCamps and DrupalCons across America and Europe. In August 2019, he wrote an article every day to share his expertise on Drupal migrations.
We look forward to seeing you online in July at any or all of these trainings!
Louis has been a Linux user since his childhood, although there was a period when he did not want to be free because it was too tricky to get PC games set up in the liberated zone.
As an adult, after deciding on a skill that could pay the rent and leave a little extra, Louis bit into Jennifer Robbins' Learning Web Design: A Beginner's Guide to HTML, CSS, JavaScript, and Web Graphics and Marijn Haverbeke's Eloquent JavaScript. When he was starting off, Louis loved spending his non-shelf-stocking, fruit-cutting, floor-mopping hours solving (or attempting to solve) code challenges. He would spend hours wrestling with problems which he should have given up on much earlier to simply learn from the solution.
Professionally, Louis got started working with NOVA Web Development, setting up sites with LibreOrganize, an association management system built with Django. Now with Agaric, Louis works on developing and configuring Drutopia sites.
Louis is thankful and excited to be the newest member of Agaric. He likes worker co-ops and exploring the role they play in transitioning to the society of the emancipated worker.
On my quest to improve a client's Drupal site performance, I considered installing the Alternative PHP Cache (APC). It reduces the overhead of compiling the PHP sources into opcodes on each request by caching the compiled code [1]. 2bits posted a very good case study about PHP opcode caches a while ago.
I have seen significant performance improvements from opcode caches in past Drupal projects. But every site is different. Usually the relative efficiency of an opcode cache correlates with the share the bootstrap process takes of total page rendering time. This can be easily measured with a profiling tool like Xdebug and visualization software (KCachegrind is an excellent free software product; for Mac OS X there's MacCallGrind).
Visualization of Xdebug profiling results with MacCallGrind
After the first profiling run, it became clear that the bootstrap process only amounted to about 10% of total page rendering time. Since the full rendering period was an order of magnitude greater than the whole bootstrapping period, any performance improvement in the bootstrapping process would have no perceivable performance impact. Nevertheless I took some time to make some observations. I was especially curious about comparing APC running with apc.stat enabled and disabled.
For the purpose of this test I put the site on a Virtual Box machine running Debian Wheezy: PHP 5.3.10 with Xdebug 2.1.3, MySQL 5.1.58, PHP APC 3.1.2, Apache 2.2.22. Some initial test runs were made to establish the amount of memory APC needed to avoid cache resets, the actual experiment used a value of 128 MB for apc.shm_size. To warm up the APC cache two requests to the page were made before starting each test series. A test series consists of requesting the homepage with profiling enabled and noting the total execution time of drupal_bootstrap and the share it has of the total page rendering time — repeated 10 times.
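Collecting the settings mentioned above, the APC configuration for the test can be sketched as an ini fragment like this (a sketch only; the file path, and whether the M suffix is accepted, vary with the APC version and distribution, and older builds take plain megabytes):

```ini
; /etc/php5/conf.d/apc.ini (location varies by distribution)
extension=apc.so
; 128 MB shared memory segment, established as enough to avoid cache resets:
apc.shm_size=128M
; apc.stat=1 (the default) makes APC stat() source files on each request to
; detect changes; the experiment compared this against apc.stat=0:
apc.stat=1
```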
Execution time of drupal_bootstrap with APC cache disabled
Execution time of drupal_bootstrap with APC cache enabled
The results show a reduction in execution time between 25% and 60% taking the standard deviation into account. Disabling apc.stat had no measurable effect.
1. PHP was designed for adding dynamic content to web pages by embedding snippets of code (<?php ... ?>) in HTML markup. Still the most popular way of running PHP is by means of Apache with mod_php, which was originally written for that use case. Each time a request comes to a page containing PHP, that code is parsed and executed in fractions of a second. The growth of the PHP community and the increasing complexity of problems being solved with PHP have led to the development of ever more complex software - like Drupal. If you look at the source of Drupal, most of the files contain purely PHP with a little bit of HTML here and there, primarily in the themes. Now the cost of the parser loading and compiling the sources into byte code to be executed by the interpreter has become a challenge. An opcode cache like APC saves time because it keeps the compiled code in memory, thereby reducing the overhead per request.
In a previous article we explained the syntax used to write Drupal migrations. When migrating into content entities, these define several properties that can be included in the process section to populate their values. For example, when importing nodes you can specify the title, publication status, creation date, etc. In the case of users, you can set the username, password, timezone, etc. Finding out which properties are available for an entity might require some Drupal development knowledge. To make the process easier, in today’s article we are presenting a reference of properties available in content entities provided by Drupal core and some contributed modules.
For each entity we will present: the module that provides it, the class that defines it, and the available properties. For each property we will list its name, field type, a description, and a note if the field allows unlimited values (i.e. it has unlimited cardinality). The list of properties available for a content entity depends on many factors, for example, whether the entity is revisionable (e.g. revision_default), translatable (e.g. langcode), or both (e.g. revision_translation_affected). The modules that are enabled on the site can also affect the available properties. For instance, if the "Workspaces" module is installed, it will add a workspace property to many content entities. This reference assumes that Drupal was installed using the standard installation profile and that all modules that provide content entities are enabled.
It is worth noting that entity properties are divided into two categories: base field definitions and field storage configurations. Base field definitions will always be available for the entity. On the other hand, the presence of field storage configurations will depend on various factors. For one, they can only be added to fieldable entities. Attaching the fields to the entity can be done manually by the user, by a module, or by an installation profile. Again, this reference assumes that Drupal was installed using the standard installation profile. Among other things, it adds a user_picture image field to the user entity and body, comment, field_image, and field_tags fields to the node entity. For entities that can have multiple bundles, not all properties provided by the field storage configurations will be available in all bundles. For example, with the standard installation profile all content types will have a body field associated with them, but only the article content type has the field_image and field_tags fields. If subfields are available for the field type, you can migrate into them.
Module: Node (Drupal Core)
Class: Drupal\node\Entity\Node
Related article: Writing your first Drupal migration
List of base field definitions:
List of field storage configurations:
Module: User (Drupal Core)
Class: Drupal\user\Entity\User
Related articles: Migrating users into Drupal - Part 1 and Migrating users into Drupal - Part 2
List of base field definitions:
List of field storage configurations:
Module: Taxonomy (Drupal Core)
Class: Drupal\taxonomy\Entity\Term
Related article: Migrating taxonomy terms and multivalue fields into Drupal
List of base field definitions:
Module: File (Drupal Core)
Class: Drupal\file\Entity\File
Related articles: Migrating files and images into Drupal and Migrating images using the image_import plugin
List of base field definitions:
Module: Media (Drupal Core)
Class: Drupal\media\Entity\Media
List of base field definitions:
List of field storage configurations:
Module: Comment (Drupal Core)
Class: Drupal\comment\Entity\Comment
List of base field definitions:
List of field storage configurations:
Module: Aggregator (Drupal Core)
Class: Drupal\aggregator\Entity\Feed
List of base field definitions:
Module: Aggregator (Drupal Core)
Class: Drupal\aggregator\Entity\Item
List of base field definitions:
Module: Custom Block (Drupal Core)
Class: Drupal\block_content\Entity\BlockContent
List of base field definitions:
List of field storage configurations:
Module: Contact (Drupal Core)
Class: Drupal\contact\Entity\Message
List of base field definitions:
Module: Content Moderation (Drupal Core)
Class: Drupal\content_moderation\Entity\ContentModerationState
List of base field definitions:
Module: Path alias (Drupal Core)
Class: Drupal\path_alias\Entity\PathAlias
List of base field definitions:
Module: Shortcut (Drupal Core)
Class: Drupal\shortcut\Entity\Shortcut
List of base field definitions:
Module: Workspaces (Drupal Core)
Class: Drupal\workspaces\Entity\Workspace
List of base field definitions:
Module: Custom Menu Links (Drupal Core)
Class: Drupal\menu_link_content\Entity\MenuLinkContent
List of base field definitions:
Module: Paragraphs module
Class: Drupal\paragraphs\Entity\Paragraph
Related article: Introduction to paragraphs migrations in Drupal
List of base field definitions:
List of field storage configurations:
Module: Paragraphs Library (part of paragraphs module)
Class: Drupal\paragraphs_library\Entity\LibraryItem
List of base field definitions:
Module: Profile module
Class: Drupal\profile\Entity\Profile
List of base field definitions:
This reference includes all core content entities and some provided by contributed modules. The next article will include a reference for Drupal Commerce content entities. That being said, it would be impractical to cover all contributed modules. To get such a list yourself for other content entities, load the entity_field.manager service and call its getFieldStorageDefinitions() method, passing the machine name of the entity as a parameter. Although this reference only covers content entities, the same process can be used for configuration entities.
What did you learn in today’s article? Did you know that there were so many entity properties in Drupal core? Were you aware that the list of available properties depends on factors like whether the entity is fieldable, translatable, and revisionable? Did you know how to find properties for content entities from contributed modules? Please share your answers in the comments. Also, we would be grateful if you shared this article with your friends and colleagues.
Only Section 3 is particularly Drupal/Drush specific, but still might give you a hint about running remote commands that use ssh from behind their shiny CLI.
Since you asked so politely: if you are already familiar with hosting options, bash shell configuration, and Drush, this somewhat lengthy article can be summed up quite quickly, really. For those who might not be pros in any one of these, you can take this as the quick intro, and keep reading to learn why these situations exist, as well as the detailed fix.
Jump to the "Part" below if you only need help with one or more of these.
Let's start where most people do: setting up the hosting itself. In most cases, a managed hosting provider will have some way to select the appropriate version of PHP for your individual site. If that's the case - pick your target, and you should be good to go (move on to Part 2, you lucky dog)! If you do your own hosting, there will be some additional steps to take, which I'll give an overview of, and some additional resources to get you over this first hurdle.
If you are running the "A-PAtCHy" web server (possible name change coming?) you will not be able to use mod_php for your PHP duties, as this method does not allow the web server to directly serve content of different virtual hosts using different versions of PHP. Instead, I recommend using PHP's "FastCGI Process Manager" service - aka FPM. This is a stand-alone service that Apache and NGINX speak to using new-age technology from 1996 called FastCGI. It's still technically CGI, only, like, Fast (seriously, it works really well). Your web server hands off to this service with its related FastCGI/proxy module.
The process is quite similar for both web servers, and an article over at Linode covers the basics of this method for each, but wait! Finish reading at least this paragraph before you jump over there for both a caveat emptor and then some Debian specific derivations, if you need those (the article is Ubuntu-specific). In the article, they utilize the excellent PHP resources offered by Ondřej Surý. From this PPA/Debian APT resource, you can run concurrent installations of any of the following PHP versions (listed as of this writing): 5.6, 7.0, 7.1, 7.2, 7.3, 7.4, 8.0, 8.1, and 8.2. Do keep in mind, however, that versions prior to 8.0 are [per php.net] now past their supported lifetime and no longer actively developed (see also this FAQ for more details). Debian-specific instructions for setting up this repository in apt (as opposed to Ubuntu PPA support) are also found within Ondřej's instructions. The remainder of the Linode process should still apply with that one change. OK, run along and get the PHP basics talking to your web server. If you'll be running Drupal (why wouldn't you?) then you'll want to ensure you have the version-specific php modules that it requires (this is for Drupal 9+, but links to earlier revisions also). I'll wait...
Assuming you have now progressed as far as having both versions of PHP you need installed, and followed the article from Linode above (or whatever your favorite substitute source was), you likely noticed the special sauce that connects the web server to a particular PHP-FPM service. In Apache, we have: SetHandler "proxy:unix:/var/run/php/php8.0-fpm.sock|fcgi://localhost", and in NGINX flavor, it's: fastcgi_pass unix:/var/run/php/php8.0-fpm.sock;. These directives point to the Unix socket defined in the default PHP "pool" for the particular version. For reference, these are defined in /etc/php/{version}/fpm/pool.d/www.conf and therein will look like: listen = /var/run/php/php8.1-fpm.sock. So, all that's necessary to select your PHP version for your web server is to point to whichever socket location corresponds to the version of PHP you want.
The Linode article does not go into handling multiple host names, and I won't go too deep here either, as I've already navigated headlong into a bit of a scope-creep iceberg. The quick-and-dirty: for Apache, add another site configuration (as in, add another your_site.conf in /etc/apache2/sites-available, and link to it from sites-enabled), repeating the entire VirtualHost block and everything inside it; however, use a different listen port (or the same port) and add the ServerName directive to specify the unique DNS name. Likewise with NGINX, except here you repeat the full server block in another configuration file, changing the listen and/or server_name bits. Oh yeah - you'll probably be changing the folder location of the Drupal installation in there too; that should definitely help reduce some confusion.
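To make the "repeat the VirtualHost block" advice concrete, here is a minimal sketch of two Apache site configurations pinned to different PHP-FPM sockets. The site names, document roots, and PHP versions are hypothetical, and it assumes the proxy_fcgi module is enabled:

```apacheconf
# /etc/apache2/sites-available/site-a.conf (hypothetical)
<VirtualHost *:80>
    ServerName site-a.example.com
    DocumentRoot /var/www/site-a/web
    <FilesMatch \.php$>
        # Hand PHP files to the 8.1 FPM pool's socket:
        SetHandler "proxy:unix:/var/run/php/php8.1-fpm.sock|fcgi://localhost"
    </FilesMatch>
</VirtualHost>

# /etc/apache2/sites-available/site-b.conf (hypothetical)
<VirtualHost *:80>
    ServerName site-b.example.com
    DocumentRoot /var/www/site-b/web
    <FilesMatch \.php$>
        # Same idea, different socket - this site stays on 7.4:
        SetHandler "proxy:unix:/var/run/php/php7.4-fpm.sock|fcgi://localhost"
    </FilesMatch>
</VirtualHost>
```

Enable each with a2ensite and reload Apache; the only per-site difference that selects the PHP version is the socket path in SetHandler.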
Phew - we should have the web server out of the way now!
Next up: PHP on your command line. Here, I'm referring to what you get when you type php on the command line once logged in (via ssh) as the user that manages the web site. In this section, I'm assuming that, per best practices, there are different users for each site. This method does not help much if you have but one user... though I guess it can help if they're both using the same non-default version of php, in contrast to, say, other users on the server.
When a server has multiple versions of PHP, only one of them at a given time will ever live at the path /usr/bin/php. Ordinarily, this is what you get when you type just php at the command line. This also is what you'll get whenever you run a file with a shebang of #!/usr/bin/env php, meaning if you run drush (or wp-cli, for our WordPress friends), you'll get whatever PHP is found there as well. You can run which php if you'd like to see where php is found.
At this point, definitely check php --version. If you are getting the version you want, you're done with this article! Well, maybe not - you just want to switch to the account where you do require a different version of PHP than this gives you.
So, php --version gives one version, but there should be multiple PHP executables on the system now, right? (You should have installed multiple, or else have multiple versions available at this point.) So where are those? These can be directly executed by running them using their versions in their name, for example php8.1 or php7.4. So, what is happening here that we just get "one proper php"? Well, a couple things.
What, the smog? No... well, yes, that's a problem, but some good things happened in 2022. In this instance, our first issue comes from an environment variable. In particular: the venerable PATH, which contains a list of locations that the shell will use to look for a given executable as you enter a command name (or, again, as specified by a shebang). The PATH variable is a colon-delimited list, typically looking about like this: /usr/local/bin:/usr/bin:/bin. The shell simply looks for the first occurrence of your command as it peeks in each directory in sequence (left-to-right). Is there a /usr/local/bin/php? That's what you'll get. If not, how about /usr/bin/php? And so on, until it finds one, or else you've mistyped pph and end up with command not found instead. You can see your path with echo $PATH (or try which agaric; I'm guessing you won't have an agaric program, so this will tell you where it looked for it when it failed to find one).
The second part of this equation is what is going on with the /usr/bin/php that was found. This alleged "The PHP" is actually a soft link to the current system-level default version of PHP that's installed. You can see how this situation is resolved with a command such as readlink -f /usr/bin/php. This command basically says "read the symbolic link (recursively, due to the -f; try it again without the -f!) and show what it's [ultimately] pointing to". This link (and those it links to) comes from an "alternatives" system used by Debian-like systems that connects such things as the canonical name of an executable to a specific installed version. You can learn more about how this is set up (for PHP, anyway) from... you guessed it: Ondřej's FAQ.
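You can watch such a chain resolve without touching the real alternatives system. This sketch builds a stand-in chain in a temporary directory (all names hypothetical):

```shell
#!/usr/bin/env bash
# Build a two-hop symlink chain like php -> /etc/alternatives/php -> php8.1
# in a throwaway directory, then resolve it.
set -e
dir=$(mktemp -d)
printf '#!/bin/sh\necho "PHP 8.1 stand-in"\n' > "$dir/php8.1"
chmod +x "$dir/php8.1"
ln -s "$dir/php8.1" "$dir/alternatives-php"  # plays /etc/alternatives/php
ln -s "$dir/alternatives-php" "$dir/php"     # plays /usr/bin/php

readlink "$dir/php"      # one hop only: .../alternatives-php
readlink -f "$dir/php"   # follows the whole chain: .../php8.1
```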
Now, where we have multiple versions of PHP installed, it's generally impractical to change all the shebang lines to something else, but that is technically one way to do things. We're also assuming you want to use multiple versions simultaneously here - so updating via the alternatives system isn't a great option either - if you can even do that. There is a simple method to make this work, even as an unprivileged user: make your own link called php and make sure the shell can find it in the PATH.
What we will do is create our own ~/bin folder, and make our link there. Then, we just make sure that ~/bin is in our path, and comes before other locations that have a php file. There's no shortage of places for customizing the PATH (and bash, generally), and quite frankly, since I'm not positive what the canonical location is, I'll happily follow the Debian manual, which says ~/.bashrc. The particular file you'll want to use can be influenced by the type of shell you request (login vs. non-login, and interactive vs. non-interactive). In the manual, part of their example looks like this:
# set PATH so it includes user's private bin if it exists
if [ -d ~/bin ] ; then
PATH=~/bin"${PATH+:$PATH}"
fi
export PATH
Curious what that odd-looking ${PATH+:$PATH} syntax is about? Most people just refer to PATH as $PATH when they want it, like this: PATH=~/bin:$PATH, right? Well, yes, and that will probably work just fine, but that weird reference does have some smarts. These are something called parameter expansions. Note that in proper bash parlance, the thing we've been calling a variable this whole time is referred to as a parameter. Go figure... that certainly didn't help me find this reference documentation. If you are interested in shell programming (which I clearly think everyone is, or should be), these can be very helpful to know. You'll bump into these when performing various checks and substitutions on varia-er, parameters. Check out the bash documentation to figure this one out.
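If you want to see what ${PATH+:$PATH} buys you, here is a tiny experiment using a hypothetical DEMO parameter in place of PATH:

```shell
#!/usr/bin/env bash
# ${VAR+word} expands to "word" only when VAR is set, so the snippet above
# appends ":$PATH" only when PATH already has a value - no stray trailing
# colon when it doesn't. DEMO stands in for PATH here.

unset DEMO
echo "result=[/home/me/bin${DEMO+:$DEMO}]"
# prints: result=[/home/me/bin]

DEMO=/usr/bin:/bin
echo "result=[/home/me/bin${DEMO+:$DEMO}]"
# prints: result=[/home/me/bin:/usr/bin:/bin]
```

That matters because an empty entry in PATH (which a trailing colon creates) means "the current directory", a small security wart the expansion quietly avoids.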
OK, this looks good! Let's go ahead and add that to the ~/.bashrc file in your ssh user's home folder. Now, you either need to source the updated file (with . ~/.bashrc), or just reconnect. Sourcing (abbreviated with that previous ., otherwise spelled out as source ~/.bashrc) essentially executes the referenced file in a way that can modify the current shell's context. If you were to just run ~/.bashrc without sourcing it, it would happily set up an environment, but all that gets wiped out when the script ends.
Now, let's get moving with this plan again. Make a bin folder in our ssh user's home folder: mkdir ~/bin. Finally, link the php version you want in there. Here, I'll do php8.1: ln -s /usr/bin/php8.1 ~/bin/php. Voila! Now when you run php --version, you'll get PHP 8.1!
For the impatient: reconnect and skip to the next paragraph. For the curious (or when you have time to come back): it turns out our shell has a bit of memory we never expected it to. If you've typed php sometime earlier in your session, and bash last found that the one in /usr/bin/php came up first, it's now remembered that so it doesn't have to find it again. While you can just re-login - again! - you might also want to take the reins of your system and try typing hash -d php (see help hash to learn what that does - the help command covers shell built-in functionality, like hash). At last, php really works the way we wanted! No more odd little shell corners hiding dusty references on us.
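If you'd like to see that cache in action without risking your real php, here is a sketch with a throwaway command name (a hypothetical mycmd) in temporary directories:

```shell
#!/usr/bin/env bash
# bash caches the location of commands it has run. If a new executable with
# the same name later appears earlier in PATH, the cached one keeps winning
# until the entry is dropped with hash -d.
early=$(mktemp -d); late=$(mktemp -d)
PATH="$early:$late:$PATH"

printf '#!/bin/sh\necho late\n' > "$late/mycmd"; chmod +x "$late/mycmd"
mycmd            # prints: late  (and bash caches $late/mycmd)

printf '#!/bin/sh\necho early\n' > "$early/mycmd"; chmod +x "$early/mycmd"
mycmd            # still prints: late - the cached location wins

hash -d mycmd    # drop the cached entry
mycmd            # prints: early
```

Note the cache only goes stale like this when PATH itself is unchanged, which is exactly our situation: ~/bin was already in the PATH before the new php link appeared inside it.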
Finally... despite my droning on, we're making progress! At this point, when you call upon your drush status, it should actually run without errors (some things under php7.x just don't work now - as expected) and show that it's using the correct php version.
The Drush boffins graced us with doodads called site aliases that allow us to readily send commands to the various sites we control. If you don't know about those, you'll have to read up on them first. The prolific Moshe Weitzman (along with some 310 contributors, and counting) didn't give us these because they were getting bored after 20 years of Drupal; they're pretty essential to working with Drupal effectively.
Assuming you have a grasp of aliases under your belt, let's try a Drush command from our local environment to the remote system: drush @test status. OK - something is wrong here. My Drush just said I'm still using php7.4 and not my beloved 8.1... again! Does Drush have a memory too? No, it just doesn't care how you want your ssh sessions to work; those are left to you, and Drush does its own thing. The ssh connection that Drush opens has its own context set up, unlike the one we have when we've ssh'd in interactively. Thankfully, there's a quick fix for this - the key Drush needs to know is how to set the PATH up so it also gets our targeted PHP version. Let's add our modified PATH to the site alias configuration, so Drush also knows what the cool kids are doing.
live:
  host: example.com
  paths:
    drush-script: /home/example-live/vendor/bin/drush
  env-vars:
    PATH: /home/example-live/bin:/usr/bin:/bin
  root: /home/example-live/web
  uri: 'https://example.com/'
  user: example-live
Note the specified PATH of /home/example-live/bin:/usr/bin:/bin places the bin directory in home at the beginning again. At last, when we run drush @test status, it's telling us the PHP it uses is the one that we use. We can share, Drush. We made it!
That wraps it up for this one. Hopefully you now feel a little more confident you have some tools helping you master your environment. The version of PHP you need is now at your command. Now you can go get busy with your composer updates and other TODO's. By the way, if all this configuration headache is just not for you - check out our hosted Drutopia platform. We run an instance of this distribution, and provide all these niceties as part of our hosting setup, so reach out if interested. Either way, thanks for coming by!
Cities and towns everywhere offer children and adults myriad programs, events, and places for enriching experiences. These activities and services come from various levels and agencies of government—operating schools, libraries, parks, and more—as well as from not-for-profit organizations, civic groups, private educational institutions, and others. However, any given person—say a single parent with three kids—has no time-efficient way of knowing about all of these opportunities.
Cambridge, Massachusetts, took on this problem. No software or website can solve this by itself, but an easily searchable directory with built-in reminders and tools to help keep it up to date makes finding all available opportunities achievable. Developed based on hundreds of hours of research and interviews led by the Cambridge Kids' Council, Find It Cambridge makes it easier for parents and other care-giving adults to find the amazing array of activities, services, and resources that are available for children, youth, and families in Cambridge.
This year, Agaric gave the site a major upgrade and made Find It capabilities freely available for other cities and towns.
If this is of interest to you for your city or region, especially if you work in an afterschool network or are otherwise in the thick of bringing opportunities to children, please get in touch by e-mail, at 1 508 283 3557, or through our contact form!
Sign up below to get (very) occasional updates.
The Views module provides a flexible method for Drupal site builders to present data. On a recent project we needed to filter a view's result set in a way we could not achieve by means of the module's UI. How do you programmatically alter a view's result set before rendering? Let's see how to do it using the hooks provided by the module.
The need surfaced while working on the website for MIT's Global Studies and Languages department, which uses Views to pull in data from a remote service and display it. Most of the time you can configure your presentation needs through the UI using Views and Views-related contributed modules. Nevertheless, sometimes you need to implement a specific requirement which is not available out of the box. Luckily, Views provides hooks to alter its behavior and results. Let’s see how to filter Views results before they are rendered.
Assume we have a website which aggregates book information from different sources. We store the book name, author, year of publication, and ISBN (International Standard Book Number). ISBNs are unique book identifiers which can be 10 or 13 characters long. The last character in either version is a check digit, and the 13-character version adds a 3-character prefix. The nine digits in between are the same. A book can have both versions. For example:
ISBN10: 1849511160
ISBN13: 9781849511162
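To see concretely that these two values identify the same book, here is a standalone sketch (plain PHP, outside Views) comparing the nine shared digits:

```php
<?php

$isbn10 = '1849511160';
$isbn13 = '9781849511162';

// Drop the ISBN-10 check digit: keep the first nine characters.
$core10 = substr($isbn10, 0, 9);
// Drop the ISBN-13 prefix (978) and check digit: keep nine characters.
$core13 = substr($isbn13, 3, 9);

var_dump($core10 === $core13); // bool(true)
```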
In our example website, we only use one ISBN. If both versions are available, the 10-character version is discarded. We do this to prevent duplicate book entries which differ only in ISBN as shown in the following picture:
To remove the duplicate entries, follow this simple two-step process:
After reviewing the list of Views hooks, hook_views_pre_render is the one we are going to use to filter results before they are rendered. Now, let’s create a custom module to add the required logic. I have named my module views_alter_results so the hook implementation would look like this:
/**
 * Implements hook_views_pre_render().
 */
function views_alter_results_views_pre_render(&$view) {
  // Custom code.
}
The ampersand in the function parameter indicates that the view object is passed by reference. Any change we make to the object will be kept. The view object has a result property. Using the Devel module, we can call dsm($view->result) to have a quick look at the results.
Each element in the array is a node that will be displayed in the final output. If we expand one of them, we can see more information about the node. Let’s drill down into one of the results until we get to the ISBN.
The output will vary depending on your configuration. In this example, we have created a Book content type and added an ISBN field. Before adding the logic to filter the unwanted results, we need to make sure that this logic will only be applied for the specific view and display we are targeting. By default, hook_views_pre_render will be executed for every view and display unless otherwise instructed. We can apply this restriction as follows:
/**
 * Implements hook_views_pre_render().
 */
function views_alter_results_views_pre_render(&$view) {
  if ($view->name == 'books' && $view->current_display == 'page_book_list') {
    // Custom code.
  }
}
Next, the logic to filter results.
/**
 * Implements hook_views_pre_render().
 */
function views_alter_results_views_pre_render(&$view) {
  if ($view->name == 'books' && $view->current_display == 'page_book_list') {
    $isbn10_books = array();
    $isbn13_books = array();
    $remove_books = array();
    foreach ($view->result as $index => $value) {
      $isbn = $value->field_field_isbn[0]['raw']['value'];
      if (strlen($isbn) === 10) {
        // [184951116]0.
        $isbn10_books[$index] = substr($isbn, 0, 9);
      }
      elseif (strlen($isbn) === 13) {
        // 978[184951116]2.
        $isbn13_books[$index] = substr($isbn, 3, 9);
      }
    }
    // Find books that have both ISBN10 and ISBN13 entries.
    $remove_books = array_intersect($isbn10_books, $isbn13_books);
    // Remove repeated books.
    foreach ($remove_books as $index => $value) {
      unset($view->result[$index]);
    }
  }
}
To filter the results we use unset on $view->result. After this process, the result property of the view object will look like this:
And our view will display without duplicate book entries, as seen here:
Before wrapping up, I’d like to share two modules that might help you achieve similar results: Views Merge Rows and Views Distinct. Every use case is different; if neither of these modules gets you where you want to be, you can leverage hook_views_pre_render to implement your custom requirements.
As indicated by Leon and efpapado, this approach only works for views that present all results on a single page, which was the original use case. The altering presented here only affects the current page, so the pager will not work as expected.
Saturday, February 2nd, 1:00 to 4:00 p.m.
Encuentro5 Community Space
9A Hamilton Place, Boston, MA 02108
encuentro5 (e5), DigBoston, UjimaBoston and Agaric Cooperative invite your participation in an important discussion on Technology and Revolution. The event is part of a series of discussions being held nationwide and coordinated by May First/People Link and the Center for Media Justice—leading up to an international convergence in Mexico City later this year.
Notable participants include: Alfredo Lopez, author, Puerto Rican independista, and co-director of May First/People Link; and Rajesh Kasturirangan, mathematician, cognitive scientist, and professor at the National Institute of Advanced Studies in India.
Over the last few decades, technological advances have not only radically changed methods of human communication but have also started to change humanity itself in ways that grassroots organizations on the political left have been slow to address. To the extent we have done so, it has been mostly to advocate for disenfranchised communities’ access to computers and broadband internet service.
But we have largely failed to grapple with issues beyond the rise of the internet and huge corporate social media platforms like Facebook and Twitter. And we’ve barely scratched the surface of those key changes, let alone put much thought into analyzing the effects of newer technologies like robotics, artificial intelligence, big data, and genetic engineering on our communities. This is all the more alarming because rapid technological change has aggravated the inequalities about which the left has traditionally cared.
Nonetheless, social-change movements continuously emerge, often in unexpected spaces, but especially in artistic and youth spaces or from insurgent social movements of the oppressed and exploited. They create campaigns to challenge potentially negative technological developments and propose more helpful community-centered technologies in their place.
In the interest of promoting these movements and their just agendas, this gathering will convene organizers for an afternoon of sharing and thinking together. We will be sharing information and analyses about these topics in short, plain-spoken, manageable conversations. Our thinking together will be strategic, asking and answering straightforward questions:
More information at https://techandrev.org
Today we complete the user migration example. In the previous post, we covered how to migrate email, timezone, username, password, and status. This time, we cover creation date, roles, and profile pictures. The source, destination, and dependencies configurations were explained already. Therefore, we are jumping straight to the process transformations in this entry.
You can get the full code example at https://github.com/dinarcon/ud_migrations The module to enable is UD users whose machine name is ud_migrations_users. The two migrations to execute are udm_user_pictures and udm_users. Notice that both migrations belong to the same module. Refer to this article to learn where the module should be placed.
The example assumes Drupal was installed using the standard installation profile. In particular, we depend on a Picture (user_picture) image field attached to the user entity. The word in parentheses is the machine name of the image field.
The explanation below is only for the user migration. It depends on a file migration to get the profile pictures. One motivation to have two migrations is for the images to be deleted if the file migration is rolled back. Note that other techniques exist for migrating images without having to create a separate migration. We have covered two of them in the articles about subfields and constants and pseudofields.
Have a look at the previous post for details on the source values. For reference, the user creation time is provided by the member_since column, and one of the values is April 4, 2014. The following snippet shows how the various user date-related properties are set:
created:
  plugin: format_date
  source: member_since
  from_format: 'F j, Y'
  to_format: 'U'
changed: '@created'
access: '@created'
login: '@created'
The created entity property stores a UNIX timestamp of when the user was added to Drupal. The value itself is an integer representing the number of seconds since the epoch. For example, 280299600 represents Sun, 19 Nov 1978 05:00:00 GMT. Kudos to the readers who knew this is Drupal's default expire HTTP header. Bonus points if you knew it was chosen in honor of someone’s birthdate. ;-)
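You can verify that timestamp yourself with a one-liner (plain PHP):

```php
<?php

// 280299600 seconds after the epoch, rendered in GMT.
echo gmdate('D, d M Y H:i:s', 280299600) . " GMT\n";
// Prints: Sun, 19 Nov 1978 05:00:00 GMT
```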
Back to the migration, you need to transform the provided date from Month day, year format to a UNIX timestamp. To do this, you use the format_date plugin. The from_format is set to F j, Y which means your source date consists of:
- A full textual month name, like April (F).
- The day of the month without leading zeros, like 4 (j).
- A four-digit year, like 2014 (Y).
If the value of from_format does not make sense, you are not alone. It is actually assembled from format characters of the date PHP function. When you need to specify the from and to formats, you basically need to look at the documentation and assemble a string that matches the desired date format. You need to pay close attention because upper and lowercase letters represent different things, like Y and y for the year with four digits versus two digits respectively. Some date components have subtle variations, like d and j for the day with or without leading zeros respectively. Also, take into account white spaces and date component separators. To finish the plugin configuration, you need to set the to_format configuration to something that produces a UNIX timestamp. If you look again at the documentation, you will see that U does the job.
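Under the hood this is plain PHP date handling. Here is a minimal sketch of the same conversion using DateTime directly rather than the format_date plugin, assuming a UTC timezone (the leading ! in the format resets unspecified fields, such as the time, to zero):

```php
<?php

// Parse 'April 4, 2014' with the F j, Y format characters and emit a
// UNIX timestamp, mirroring from_format / to_format in the snippet above.
$date = DateTime::createFromFormat('!F j, Y', 'April 4, 2014', new DateTimeZone('UTC'));
echo $date->format('U') . "\n"; // 1396569600
```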
The changed, access, and login entity properties are also dates in UNIX timestamp format. changed indicates when the user account was last updated. access indicates when the user last accessed the site. login indicates when the user last logged in. For brevity, the same value assigned to created is also assigned to these three entity properties. The at sign (@) means copy the value of a previous mapping in the process pipeline. If needed, each property can be set to a different value or left unassigned. None is actually required.
For reference, the roles are provided by the user_roles column, and one of the values is forum moderator, forum admin. It is a comma-separated list of roles from the legacy system which need to be mapped to Drupal roles. It is possible that the user_roles column is not provided at all in the source. The following snippet shows how the roles are set:
roles:
  - plugin: skip_on_empty
    method: process
    source: user_roles
  - plugin: explode
    delimiter: ','
  - plugin: callback
    callable: trim
  - plugin: static_map
    map:
      'forum admin': administrator
      'webmaster': administrator
    default_value: null
First, the skip_on_empty plugin is used to skip the processing of the roles if the source column is missing. Then, the explode plugin is used to break the list into an array of strings representing the roles. Next, the callback plugin invokes the trim PHP function to remove any leading or trailing whitespace from the role names. Finally, the static_map plugin is used to manually map values from the legacy system to Drupal roles. All of these plugins have been explained previously. Refer to other articles in the series or the plugin documentation for details on how to use and configure them.
There are some things worth mentioning about migrating roles using this particular process pipeline. If the comma-separated list includes spaces before or after the role name, you need to trim the value because the static map performs an equality check. Extraneous space characters will produce a mismatch.
Also, you do not need to map the anonymous or authenticated roles. Drupal users are assumed to be authenticated and cannot be anonymous. Any other role needs to be mapped manually to its machine name. You can find the machine name of any role in its edit page. In the example, only two out of four roles are mapped. Any role that is not found in the static map will be assigned the value null, as indicated in the default_value configuration. After processing, the null value will be ignored and no role will be assigned. But you could use this feature to assign a default role in case the static map does not produce a match.
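Putting the pipeline together, this standalone sketch (plain PHP, no migrate API involved) mirrors what the explode, callback, and static_map steps do to one source value:

```php
<?php

$source = 'forum moderator, forum admin';
$map = [
  'forum admin' => 'administrator',
  'webmaster' => 'administrator',
];

// explode: break the comma-separated list into an array of strings.
$roles = explode(',', $source);
// callback with trim: remove surrounding whitespace from each role name.
$roles = array_map('trim', $roles);
// static_map with default_value: null for anything not in the map.
$mapped = [];
foreach ($roles as $role) {
  $mapped[] = $map[$role] ?? NULL;
}
// Null values are ignored, so $assigned ends up as ['administrator'].
$assigned = array_values(array_filter($mapped));
```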
For reference, the profile picture is provided by the user_photo column, and one of the values is P01. This value corresponds to the unique identifier of one record in the udm_user_pictures file migration, which is part of the same demo module. It is important to note that the user_picture field is not a user entity property. The field is created by the standard installation profile and attached to the user entity. You can find its configuration in the “Manage fields” tab of the “Account settings” configuration page at /admin/config/people/accounts. The following snippet shows how profile pictures are set:
user_picture/target_id:
  plugin: migration_lookup
  migration: udm_user_pictures
  source: user_photo
Image fields are entity references. Their target_id property needs to be an integer containing the file ID (fid) of the image. This can be obtained using the migration_lookup plugin. Details on how to configure it can be found in this article. You could simply use user_picture as your field mapping because target_id is the default subfield and could be omitted. Also note that the alt subfield is not mapped. If present, its value will be used for the alternative text of the image. But if it is not specified, like in this example, Drupal will automatically generate alternative text from the username. An example value would be: Profile picture for user michele.
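If your source did provide alternative text, say in a hypothetical user_photo_alt column (not part of this example's source data), the alt subfield could be mapped alongside the file ID. A sketch:

```yaml
user_picture/target_id:
  plugin: migration_lookup
  migration: udm_user_pictures
  source: user_photo
user_picture/alt:
  plugin: get
  source: user_photo_alt
```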
Technical note: The user entity contains other properties you can write to. For a list of available options, check the baseFieldDefinitions() method of the User class defining the entity. Note that more properties can be available higher up the class hierarchy.
And with that, we wrap up the user migration example. We covered how to migrate a user’s email, timezone, username, password, status, creation date, roles, and profile picture. Along the way, we presented various process plugins that had not been used previously in the series. We showed a couple of examples of process plugin chaining to make sure the migrated data is valid and in the format expected by Drupal.
What did you learn in today’s blog post? Did you know how to process dates for user entity properties? Have you migrated user roles before? Did you know how to import profile pictures? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.
Next: Migrating dates into Drupal
This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.