Here are all the links from the slide Micky told you not to try to write everything down from.
Agarics are members of a few networks and movements both local and global:
And some that didn't make the slides, that other Agarics are a part of:
At noon on Wednesday, July 22nd, 2015, Richard Stallman (RMS) and Noam Chomsky met for the first time. We met in Noam's office, located in the Stata building on the MIT campus in Cambridge, Massachusetts, where Richard also has an office. We sat down to discuss the Free Software Movement and Digital Restrictions Management, then briefly touched on domain name seizures, workers' rights, and worker cooperatives. The atmosphere was jovial yet serious, and the discussion was soon underway.
Noam asked what the Free Software Foundation and movement are all about. Richard responded that the Free Software Foundation is a nonprofit with a worldwide mission to promote computer user freedom and to defend the rights of all free software users. It raises awareness about computing privacy issues and users' rights by exposing things like Digital Restrictions Management. The movement is the foundation's outreach to users: encouraging the adoption of free software, informing the network about issues that affect it, and working to change attitudes and support laws that keep free software free.
RMS then took the time to explain Digital Restrictions Management and a little-known practice called domain name seizure. The US government can arbitrarily take a domain away from its owner without court approval. There are many reasons a domain may be seized, and if your domain ends in .com, no matter what country you are based in, you are subject to VeriSign helping the U.S. government seize it.
On the topic of Digital Restrictions Management, Richard detailed how it prevents people from being good neighbors and sharing files. DRM is often written as "Digital Rights Management", but this is misleading, since it refers to systems that are designed to take away and limit your rights.
Below is a quote by each and a few links to sites with background information:
Richard Stallman
"Isn't it ironic that the proprietary software developers call us communists? We are the ones who have provided for a free market, where they allow only monopoly. … if the user chooses this proprietary software package, he then falls into this monopoly for support … the only way to escape from monopoly is to escape from proprietary software, and that is what the free software movement is all about. We want you to escape and our work is to help you escape. We hope you will escape to the free world."
Richard Stallman websites:
https://en.wikipedia.org/wiki/Richard_Stallman
http://fsf.org
https://gnu.org
https://stallman.org
Noam Chomsky
"How people themselves perceive what they are doing is not a question that interests me. I mean, there are very few people who are going to look into the mirror and say, 'That person I see is a savage monster'; instead, they make up some construction that justifies what they do. If you ask the CEO of some major corporation what he does he will say, in all honesty, that he is slaving 20 hours a day to provide his customers with the best goods or services he can and creating the best possible working conditions for his employees. But then you take a look at what the corporation does, the effect of its legal structure, the vast inequalities in pay and conditions, and you see the reality is something far different."
Noam Chomsky websites:
https://en.wikipedia.org/wiki/Noam_Chomsky
http://web.mit.edu/linguistics/people/faculty/chomsky
http://www.chomsky.info
At the end of our first meeting, Noam suggested that we take a look at Congress to see who is voting against the TPP, and then contact those members of Congress with information about free software initiatives. We all realized that we had barely scratched the surface of the talking points we needed to cover, and we agreed to meet again to discuss overlapping issues of concern to both the labor movement and the free software movement. We will be targeting issues that have solutions ready to be implemented, and making sure we are all aware of the dangers of proprietary software.
The next meeting will be sometime in the next few months... This is a beginning.
Agaric is excited to announce online training on Drupal migrations and upgrades. In July 2020, we will offer three trainings: Drupal 8/9 content migrations, Upgrading to Drupal 8/9 using the Migrate API, and Getting started with Drupal 9.
We have been providing training for years at Drupal events and privately for clients. At DrupalCon Seattle 2019, our migration training was sold out with 40+ attendees and received very positive feedback. We were scheduled to present two trainings at DrupalCon Minneapolis 2020: one on Drupal migrations and the other on Drupal upgrades. When the conference pivoted to an online event, all trainings were cancelled. To fill the void, we are moving the full training experience online for individuals and organizations who want to learn how to plan and execute successful Drupal migration/upgrade projects.
Drupal is always evolving and the Migrate API is no exception. New features and improvements are added all the time. We regularly update our curriculum to cover the latest changes in the API. This time, both trainings will use Drupal 9 for all the examples! If you are still using Drupal 8, don't worry as the example code is compatible with both major versions of Drupal. We will also cover the differences between Drupal 8 and 9.
In this training you will learn to move content into Drupal 8 and 9 using the Migrate API. We will present an overview of the Extract-Transform-Load (ETL) pattern that the Migrate API implements. Source, process, and destination plugins will be explained to show how each affects the migration process. By the end of the workshop, you will have a better understanding of how the migrate ecosystem works and the thought process required to plan and perform migrations. All examples will use YAML files to configure migrations. No PHP coding required.
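As a taste of what the ETL pattern looks like in practice, here is a minimal migration definition of the kind covered in the training. This is a hedged sketch: the file path, column names, and IDs are hypothetical, and the csv source plugin is provided by the contributed Migrate Source CSV module.

```yaml
# Hypothetical example: import article nodes from a CSV file.
id: example_articles
label: 'Import example articles'
source:
  plugin: csv                 # Extract: requires the Migrate Source CSV module.
  path: /data/articles.csv
  ids: [slug]
process:                      # Transform: one entry per destination property.
  title: headline
  'body/value': content
destination:
  plugin: 'entity:node'       # Load: save the rows as article nodes.
  default_bundle: article
```

Running it is then a matter of drush migrate:import example_articles (with the Migrate Tools module, or a recent Drush, providing the command).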
Date: Tuesday, July 21, 2020
Time: 9 AM – 5 PM Eastern time
Cost: $500 USD
In this training you will learn to use the Migrate API to upgrade your Drupal 6/7 site to Drupal 8/9. You will practice different migration strategies, accommodate changes in site architecture, get tips on troubleshooting issues, and much more. After the training, you will know how to plan and execute successful upgrade projects.
Date: Thursday, July 23, 2020
Time: 9 AM – 5 PM Eastern time
Cost: $500 USD
We are also offering a training for people who want to get a solid foundation in Drupal site building. Basic concepts will be explained and put into practice through various exercises. The objective is that someone who might not even know about Drupal can understand the different concepts and building blocks used to create a website. A simple, fully functional website will be built over the course of the day-long class.
Date: Monday, July 13, 2020
Time: 9 AM – 5 PM Eastern time
Cost: $250 USD
Anyone is eligible for a 15% discount on their second training. Additionally, if you are a member of an under-represented community and cannot afford the full price of the training, we have larger discounts and full scholarships available. Ask Agaric to learn more about them.
We also offer customized training for you or your team's specific needs. Site building, module development, theming, and data migration are some of the topics we cover. Check out our training page or ask Agaric for more details. Custom training can be delivered online or on-site in English or Spanish.
Mauricio Dinarte is a frequent speaker and trainer at conferences around the world. He is passionate about Drupal, teaching, and traveling. Over the last few years, he has presented 30+ sessions and full-day trainings at 20+ DrupalCamps and DrupalCons across the Americas and Europe. In August 2019, he wrote an article every day to share his expertise on Drupal migrations.
We look forward to seeing you online in July at any or all of these trainings!
Learn to move content to Drupal 8 using the Migrate module without writing a single line of PHP. This training is aimed at site builders who will learn to combine various core and contributed modules and write YAML files to accomplish content migrations. No prior experience with the Migrate module is required.
Source, process, and destination plugins will be explained to show how each affects the migration. By the end of the session, you will have a better understanding of how the Migrate module works and the thought process required to plan and perform migrations.
Note: Although no prior Migrate module knowledge is required, it is expected that you have a basic understanding of nodes, content types, and fields. You can learn about these and other Drupal concepts by watching this session recording https://www.youtube.com/watch?v=02fvLzPSIjc
A working Drupal 8 local installation is required. Attendees need to feel comfortable working with the command line. They also need Composer and the ability to install Drupal modules. Drush needs to be installed in order to run migrations from the command line. Xdebug and PhpStorm are used for the debugging example (the techniques apply to other debuggers and IDEs). It is highly recommended to use DrupalVM and configure it to use the Drupal composer template.
Follow the quickstart guide to install DrupalVM with the xdebug extra package. Install the following contrib modules: Address, Entity reference revisions, Migrate plus, Migrate source csv, Migrate tools, and Paragraphs. Assistance can be provided before the training starts, but it is better to come with your local environment already set up.
See the Agaric's migration training resources for more.
Louis has been a Linux user since his childhood, although there was a period when he did not want to be free because it was too tricky to get PC games set up in the liberated zone.
As an adult, after deciding on a skill that could pay the rent and leave a little extra, Louis bit into Jennifer Robbins' Learning Web Design: A Beginner's Guide to HTML, CSS, JavaScript, and Web Graphics and Marijn Haverbeke's Eloquent JavaScript. When he was starting off, Louis loved spending his non-shelf-stocking, fruit-cutting, floor-mopping hours solving (or attempting to solve) code challenges. He would spend hours wrestling with problems he should have given up on much earlier to just learn from the solution.
Professionally, Louis got started working with NOVA Web Development, setting up sites on LibreOrganize, an association management system built with Django. Now with Agaric, Louis develops and configures Drutopia sites.
Louis is thankful and excited to be the newest member of Agaric. Louis likes worker-coops and exploring the role they play in transitioning to the society of the emancipated worker.
LibrePlanet is an annual conference hosted by the Free Software Foundation for free software enthusiasts and anyone who cares about the intersection of technology and social justice. We've attended and spoken at LibrePlanet many times over the years. This year's theme is "Trailblazing Free Software," and in that spirit Micky is speaking on the Orwellian future that has arrived and on which tech justice movements we should be supporting and joining to fight for a freedom-loving, solidarity-based future.
LibrePlanet Keynote: How can we prevent the Orwellian 1984 digital world?
Sunday, March 24th
5:15pm-6:00pm
Stata Center, Massachusetts Institute of Technology Room 32-123
Cambridge, MA
We are living in a society where, as mere individuals, information seems out of our control and in the hands of those who have the power to publish and distribute it swiftly and widely, or to refuse to publish or distribute it at all. Algorithms now sort us into global databases like PRISM or ECHELON, and devices such as StingRay cell phone trackers are used to categorize our every movement. We may build our own profiles online, but we do not have access to the meta-profile built by the corporate entities our queries traverse as we navigate online, purchasing goods and services and logging into sites where we have accounts. The level of intrusion into our most private thoughts should be alarming, yet most fail to heed the call because they feel small, alone, and unable to defy the scrutiny and disapproval of the powers that govern societal norms, and of their peers. Together, we can change this.
Micky will engage your mind on a journey to open an ongoing discussion to rediscover and reawaken your own creative thought processes. Together, we build a conversation that should never end as it will join us together transparently maintaining our freedoms, with free software as the foundation. Where do we find our personal power, and how do we use it as developers? Do we have a collective goal? Have you checked your social credit rating lately? Others have.
After years of a terrible initial experience for people who want to share their first project on Drupal.org, the Project Applications Process Revamp is a Drupal Association key priority for the first part of 2017.
A plan for incentivizing code review of every project (not just new ones) after the project applications revamp is open for suggestions and feedback.
That makes this an excellent time: right now you can get credit on your Drupal.org profile, and on that of your organization (boosting marketplace ranking), for reviewing the year-old backlog of project applications requesting review. The focus is on security review for these project applications, but if you want to give a thorough review, and then share your thoughts on how project reviews (for any project that opts in to this quality marker) should be performed and rewarded going forward, now's the time and here's the pressing need.
In a previous article we explained the syntax used to write Drupal migrations. When migrating into content entities, the entities define several properties that can be included in the process section to populate their values. For example, when importing nodes you can specify the title, publication status, creation date, etc. In the case of users, you can set the username, password, timezone, etc. Finding out which properties are available for an entity might require some Drupal development knowledge. To make the process easier, in today's article we present a reference of properties available in content entities provided by Drupal core and some contributed modules.
For each entity we will present: the module that provides it, the class that defines it, and the available properties. For each property we will list its name, field type, a description, and a note if the field allows unlimited values (i.e. it has unlimited cardinality). The list of properties available for a content entity depends on many factors. For example, whether the entity is revisionable (e.g. revision_default), translatable (e.g. langcode), or both (e.g. revision_translation_affected). The modules that are enabled on the site can also affect the available properties. For instance, if the "Workspaces" module is installed, it adds a workspace property to many content entities. This reference assumes that Drupal was installed using the standard installation profile and that all modules providing content entities are enabled.
It is worth noting that entity properties are divided into two categories: base field definitions and field storage configurations. Base field definitions will always be available for the entity. On the other hand, the presence of field storage configurations will depend on various factors. For one, they can only be added to fieldable entities. Attaching the fields to the entity can be done manually by the user, by a module, or by an installation profile. Again, this reference assumes that Drupal was installed using the standard installation profile. Among other things, that profile adds a user_picture image field to the user entity, and body, comment, field_image, and field_tags fields to the node entity. For entities that can have multiple bundles, not all properties provided by the field storage configurations will be available in all bundles. For example, with the standard installation profile all content types have a body field associated with them, but only the article content type has the field_image and field_tags fields. If subfields are available for the field type, you can migrate into them.
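For instance, a formatted text field such as body exposes value, summary, and format subfields that can be targeted individually in the process section. A hedged sketch, with made-up source column names:

```yaml
process:
  'body/value': source_html       # The raw text.
  'body/summary': source_teaser   # The optional summary subfield.
  'body/format':                  # The text format to render it with.
    plugin: default_value
    default_value: basic_html
```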
Module: Node (Drupal Core)
Class: Drupal\node\Entity\Node
Related article: Writing your first Drupal migration
List of base field definitions:
List of field storage configurations:
Module: User (Drupal Core)
Class: Drupal\user\Entity\User
Related articles: Migrating users into Drupal - Part 1 and Migrating users into Drupal - Part 2
List of base field definitions:
List of field storage configurations:
Module: Taxonomy (Drupal Core)
Class: Drupal\taxonomy\Entity\Term
Related article: Migrating taxonomy terms and multivalue fields into Drupal
List of base field definitions:
Module: File (Drupal Core)
Class: Drupal\file\Entity\File
Related articles: Migrating files and images into Drupal and Migrating images using the image_import plugin
List of base field definitions:
Module: Media (Drupal Core)
Class: Drupal\media\Entity\Media
List of base field definitions:
List of field storage configurations:
Module: Comment (Drupal Core)
Class: Drupal\comment\Entity\Comment
List of base field definitions:
List of field storage configurations:
Module: Aggregator (Drupal Core)
Class: Drupal\aggregator\Entity\Feed
List of base field definitions:
Module: Aggregator (Drupal Core)
Class: Drupal\aggregator\Entity\Item
List of base field definitions:
Module: Custom Block (Drupal Core)
Class: Drupal\block_content\Entity\BlockContent
List of base field definitions:
List of field storage configurations:
Module: Contact (Drupal Core)
Class: Drupal\contact\Entity\Message
List of base field definitions:
Module: Content Moderation (Drupal Core)
Class: Drupal\content_moderation\Entity\ContentModerationState
List of base field definitions:
Module: Path alias (Drupal Core)
Class: Drupal\path_alias\Entity\PathAlias
List of base field definitions:
Module: Shortcut (Drupal Core)
Class: Drupal\shortcut\Entity\Shortcut
List of base field definitions:
Module: Workspaces (Drupal Core)
Class: Drupal\workspaces\Entity\Workspace
List of base field definitions:
Module: Custom Menu Links (Drupal Core)
Class: Drupal\menu_link_content\Entity\MenuLinkContent
List of base field definitions:
Module: Paragraphs module
Class: Drupal\paragraphs\Entity\Paragraph
Related article: Introduction to paragraphs migrations in Drupal
List of base field definitions:
List of field storage configurations:
Module: Paragraphs Library (part of paragraphs module)
Class: Drupal\paragraphs_library\Entity\LibraryItem
List of base field definitions:
Module: Profile module
Class: Drupal\profile\Entity\Profile
List of base field definitions:
This reference includes all core content entities and some provided by contributed modules. The next article will include a reference for Drupal Commerce content entities. That being said, it would be impractical to cover all contributed modules. To get a list yourself for other content entities, load the entity_field.manager service and call its getFieldStorageDefinitions() method, passing the machine name of the entity type as a parameter. Although this reference only covers content entities, the same process can be used for configuration entities.
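As a sketch of that lookup (assuming you have Drush available on the site), the following one-liner prints the property names for the node entity type; the exact output will vary with your installed modules:

```shell
# Requires a working Drupal site with Drush; 'node' can be swapped for any
# content entity type's machine name.
drush php:eval "print_r(array_keys(\Drupal::service('entity_field.manager')->getFieldStorageDefinitions('node')));"
```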
What did you learn in today's article? Did you know that there were so many entity properties in Drupal core? Were you aware that the list of available properties depends on factors like whether the entity is fieldable, translatable, or revisionable? Did you know how to find properties for content entities from contributed modules? Please share your answers in the comments. Also, we would be grateful if you shared this article with your friends and colleagues.
Only Section 3 is particularly Drupal/Drush specific, but still might give you a hint about running remote commands that use ssh from behind their shiny CLI.
Since you asked so politely: if you are already familiar with hosting options, bash shell configuration, and Drush, this somewhat lengthy article can be summed up quite quickly, really. For those who might not be pros in any one of these, you can take this as the quick intro, and keep reading to learn why these situations exist, as well as the detailed fix.
Jump to the "Part" below if you only need help with one or more of these.
Let's start where most people do: setting up the hosting itself. In most cases, a managed hosting provider will have some way to select the appropriate version of PHP for your individual site. If that's the case - pick your target, and you should be good to go (move on to Part 2, you lucky dog)! If you do your own hosting, there will be some additional steps to take, which I'll give an overview of, and some additional resources to get you over this first hurdle.
If you are running the "A-PAtCHy" web server (possible name change coming?) you will not be able to use mod_php for your PHP duties, as that method does not allow the web server to directly serve the content of different virtual hosts using different versions of PHP. Instead, I recommend using PHP's "FastCGI Process Manager" service, aka FPM. This is a stand-alone service that Apache and NGINX can speak to using new-age technology from 1996 called FastCGI. It's still technically CGI, only, like, Fast (seriously, it works really well). Your web server hands off to this service with its related FastCGI/proxy module.
The process is quite similar for both web servers, and an article over at Linode covers the basics of this method for each, but wait! Finish reading at least this paragraph before you jump over there for both a caveat emptor and then some Debian specific derivations, if you need those (the article is Ubuntu-specific). In the article, they utilize the excellent PHP resources offered by Ondřej Surý. From this PPA/Debian APT resource, you can run concurrent installations of any of the following PHP versions (listed as of this writing): 5.6, 7.0, 7.1, 7.2, 7.3, 7.4, 8.0, 8.1, and 8.2. Do keep in mind, however, that versions prior to 8.0 are [per php.net] now past their supported lifetime and no longer actively developed (see also this FAQ for more details). Debian-specific instructions for setting up this repository in apt (as opposed to Ubuntu PPA support) are also found within Ondřej's instructions. The remainder of the Linode process should still apply with that one change. OK, run along and get the PHP basics talking to your web server. If you'll be running Drupal (why wouldn't you?) then you'll want to ensure you have the version-specific php modules that it requires (this is for Drupal 9+, but links to earlier revisions also). I'll wait...
Assuming you have now progressed as far as having both versions of PHP you need installed, and followed the article from Linode above or whatever your favorite substitute source was, you likely noticed the special sauce that connects the web server to a particular PHP-FPM service. In Apache, we have: SetHandler "proxy:unix:/var/run/php/php8.0-fpm.sock|fcgi://localhost", and in NGINX flavor, it's: fastcgi_pass unix:/var/run/php/php8.0-fpm.sock; These directives point to the Unix socket defined in the default PHP "pool" for the particular version. For reference, the pools are defined in /etc/php/{version}/fpm/pool.d/www.conf and therein look like: listen = /var/run/php/php8.1-fpm.sock. So, all that's necessary to select your PHP version for your web server is to point at the socket location for the version of PHP you want.
The Linode article does not go into handling multiple host names, and I won't go too deep here either, as I've already navigated headlong into a bit of a scope-creep iceberg. The quick-and-dirty: for Apache, add another site configuration (as in, add another your_site.conf in /etc/apache2/sites-available, and link to it from sites-enabled), repeating the entire VirtualHost and everything inside it; however, use a different listen port, or keep the same port and add the ServerName directive to specify the unique DNS name. Likewise with NGINX, except here you repeat the full server block in another configuration, changing the listen and/or server_name bits. Oh yeah - you'll probably be changing the folder location of the Drupal installation in there too; that should definitely help reduce some confusion.
Phew - we should have the web server out of the way now!
Next up: PHP on your command line. Here, I'm referring to what you get when you type php once logged in (via ssh) as the user that manages the web site. In this section, I'm assuming that, per best practices, there is a different user for each site. This method does not help much if you have but one user...though I guess it can help if both sites use the same non-default version of PHP, in contrast to, say, other users on the server.
When a server has multiple versions of PHP, only one of them at a given time will ever live at the path /usr/bin/php. Ordinarily, this is what you get when you type just php at the command line. It is also what you'll get whenever you run a file with a shebang of #!/usr/bin/env php, meaning if you run Drush (or wp-cli, for our WordPress friends), you'll get whatever PHP is found there as well. You can run which php if you'd like to see where php is found.
At this point, definitely check php --version. If you are getting the version you want, you're done with this article! Well, maybe not - you may just want to switch to the account where you require a different version of PHP than this gives you.
So, php --version gives one version, but there should be multiple PHP executables on the system now, right? (You should have installed multiple, or else have multiple versions available, at this point.) So where are those? They can be executed directly by running them with the version in the name, for example php8.1 or php7.4. So, what is happening here that we just get "one proper php"? Well, a couple of things.
What, the smog? No...well, yes, that's a problem, but some good things happened in 2022. In this instance, our first issue comes from an environment variable. In particular: the venerable PATH, which contains a list of locations that the shell will use to look for a given executable as you enter a command name (or, again, as specified by a shebang). The PATH variable is a colon-delimited list, typically looking about like this: /usr/local/bin:/usr/bin:/bin. The shell simply looks for the first occurrence of your command as it peeks in each directory in sequence (left-to-right). Is there a /usr/local/bin/php? That's what you'll get. If not, how about /usr/bin/php? And so on, until it finds one, or else you've mistyped pph and end up with "command not found" instead. You can see your path with echo $PATH (or try which agaric. I'm guessing you won't have an agaric program, so this will tell you where the shell looked for it when it failed to find one).
The second part of this equation is what is going on with the /usr/bin/php that was found. This alleged "The PHP" is actually a soft link to the current system-level default version of PHP that's installed. You can see how this situation is resolved with a command such as readlink -f /usr/bin/php. This command basically says "read the symbolic link (recursively, due to the -f; try it again without the -f!) and show what it's [ultimately] pointing to". This link (and those it links to) comes from an "alternatives" system used by Debian-like systems that connects such things as the canonical name of an executable to a specific installed version. You can learn more about how this is set up (for PHP, anyway) from...you guessed it: Ondřej's FAQ.
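You can watch those mechanics with throwaway links of your own (hypothetical names in a scratch directory; this does not touch the real alternatives chain):

```shell
# Build a two-hop chain like /usr/bin/php -> /etc/alternatives/php -> a real binary.
tmp=$(mktemp -d)
echo 'the real binary' > "$tmp/php8.1-bin"
ln -s "$tmp/php8.1-bin" "$tmp/php-default"   # stand-in for /etc/alternatives/php
ln -s "$tmp/php-default" "$tmp/php"          # stand-in for /usr/bin/php
readlink "$tmp/php"      # one hop only: the php-default link
readlink -f "$tmp/php"   # fully resolved: the php8.1-bin file
```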
Now, where we have multiple versions of PHP installed, it's generally impractical to change all the shebang lines to something else, though that is technically one way to do things. We're also assuming you want to use multiple versions simultaneously, so updating via the alternatives system isn't a great option either - if you can even do that. There is a simple method to make this work, even as an unprivileged user: make your own link called php and make sure the shell can find it first in the PATH.
What we will do is create our own ~/bin folder, and make our link there. Then, we just make sure that ~/bin is in our PATH, and comes before other locations that have a php file. There's no shortage of places for customizing the PATH (and bash, generally), and quite frankly, since I'm not positive what the canonical location is, I'll happily follow the Debian manual, which says ~/.bashrc. The particular file you'll want to use can be influenced by the type of shell you request (login vs non-login, and interactive vs non-interactive). In the manual, part of their example looks like this:
# set PATH so it includes user's private bin if it exists
# (using $HOME rather than ~, since tilde would not expand inside double quotes)
if [ -d "$HOME/bin" ] ; then
    PATH="$HOME/bin${PATH+:$PATH}"
fi
export PATH
Curious what that odd-looking ${PATH+:$PATH} syntax is about? Most people just refer to PATH as $PATH when they want it, like this: PATH=~/bin:$PATH, right? Well, yes, and that will probably work just fine, but that weird reference does have some smarts. These are something called parameter expansions. Note that in proper bash parlance, the thing we've been calling a variable this whole time is referred to as a parameter. Go figure...that certainly didn't help me find this reference documentation. If you are interested in shell programming (which I clearly think everyone is, or should be), these can be very helpful to know. You'll bump into them when performing various checks and substitutions on varia-er, parameters. Check out the bash documentation to figure this one out.
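If you want to see what that expansion actually does, here is a tiny demonstration using a scratch variable (so as not to disturb your real PATH). ${DEMO+:$DEMO} produces nothing when DEMO is unset, avoiding a stray colon in the list, and produces a colon plus the old value when it is set:

```shell
# ${VAR+word} expands to `word` only when VAR is set.
unset DEMO
echo "unset ->${DEMO+:$DEMO}"   # prints "unset ->" (nothing appended)
DEMO=/usr/bin
echo "set   ->${DEMO+:$DEMO}"   # prints "set   ->:/usr/bin"
```

(This is just a side demonstration; the snippet destined for ~/.bashrc is the one above.)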
OK, this looks good! Let's go ahead and add that to the ~/.bashrc file in your ssh user's home folder. Now, you either need to source the updated file (with . ~/.bashrc), or just reconnect. Sourcing (abbreviated with that previous dot, otherwise spelled out as source ~/.bashrc) essentially executes the referenced file in a way that can modify the current shell's context. If you were to just run ~/.bashrc without sourcing it, it would happily set up an environment, but all of that gets wiped out when the script ends.
Now, let's get moving with this plan again. Make a bin folder in your ssh user's home folder: mkdir ~/bin. Finally, link the PHP version you want in there. Here, I'll do php8.1: ln -s /usr/bin/php8.1 ~/bin/php. Voila! Now when you run php --version, you'll get PHP 8.1!
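Put together, the whole per-user setup is only a few lines. A sketch, assuming /usr/bin/php8.1 is the build you want (adjust PHP_TARGET to whatever version you actually installed):

```shell
# Give this user its own `php` that points at a specific build.
PHP_TARGET=/usr/bin/php8.1        # assumption: adjust to your installed version
mkdir -p "$HOME/bin"
ln -sf "$PHP_TARGET" "$HOME/bin/php"
export PATH="$HOME/bin${PATH+:$PATH}"
command -v php                    # should now report $HOME/bin/php
```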
For the impatient: reconnect and skip to the next paragraph. For the curious (or when you have time to come back): it turns out our shell has a bit of memory we never expected it to. If you've typed php sometime earlier in your session, and bash found that the one in /usr/bin/php came up first, it has now remembered that so it doesn't have to look it up again. While you can just re-login - again! - you might also want to take the reins of your system and try typing hash -d php (see help hash to learn what that does - the help command covers shell built-in functionality, like hash). At last, php really works the way we wanted! No more odd little shell corners hiding dusty references on us.
Finally...despite my droning on, we're making progress! At this point, when you call upon drush status, it should actually run without errors (some things under php7.x just don't work now - as expected) and show that it's using the correct PHP version.
The Drush boffins graced us with doodads called site aliases that allow us to readily send commands to the various sites we control. If you don't know about those, you'll have to read up on them first. The prolific Moshe Weitzman (along with some 310 contributors, and counting) didn't give us these because they were getting bored after 20 years of Drupal; they're pretty essential to working with Drupal effectively.
Assuming you have a grasp of aliases under your belt, let's try a Drush command from our local environment against the remote system: drush @test status
. OK - something is wrong here. My Drush just said I'm still using PHP 7.4 and not my beloved 8.1...again! Does Drush have a memory too? No, it just doesn't care how you want your SSH sessions to work; those are left to you, and Drush does its own thing. The SSH connection that Drush opens has its own context set up, unlike the one we have when we've SSH'd in interactively. Thankfully, there's a quick fix for this - the key is that Drush needs to know how to set up the PATH so it also gets our targeted PHP version. Let's add our modified PATH to the site alias configuration, so Drush also knows what the cool kids are doing.
live:
  host: example.com
  paths:
    drush-script: /home/example-live/vendor/bin/drush
  env-vars:
    PATH: /home/example-live/bin:/usr/bin:/bin
  root: /home/example-live/web
  uri: 'https://example.com/'
  user: example-live
Note that the specified PATH of /home/example-live/bin:/usr/bin:/bin
places the bin directory in home at the beginning again. At last, when we run drush @test status
, it's telling us the PHP it uses is the one that we use. We can share, Drush. We made it!
That wraps it up for this one. Hopefully you now feel a little more confident that you have some tools helping you master your environment. The version of PHP you need is now at your command. Now you can go get busy with your composer updates and other TODOs. By the way, if all this configuration headache is just not for you - check out our hosted Drutopia platform. We run an instance of this distribution and provide all these niceties as part of our hosting setup, so reach out if interested. Either way, thanks for coming by!
Timeline of trainings
Our relationship with technology is largely toxic - think Volkswagen cheating, Facebook spying, Uber being Uber. Tech is ruled by the elite and we are mostly at its mercy. As powerful movements have emerged challenging predatory power structures, let us do the same with technology.
Free/Open Source Software movements offer an alternative to corporate, predatory, proprietary technology. And yet Free Software still reflects many of these same oppressive relationships. One way to change that is with accountability.
Free Software means that anyone is free to read, use, and remix the code the software is written in. This helps with accountability because, unlike with proprietary software, experts and community members can audit the code for security flaws and disingenuous functionality. However, free software has several limitations:
Only a small percentage of the world can code. An even smaller percentage have the time to write code for Free Software, and fewer still have the time and expertise for any given project. This coder-centric framework also diminishes the many other skills essential to software: design, user research, project management, documentation, training, and outreach, to name a few.
As a major survey led by GitHub supports (and as comes as little surprise), the Free Software community is mostly white, male, cisgender, financially well-off, formally educated, able-bodied, straight, English-speaking, and from Global North countries.
This means that the same groups of people designing and building proprietary software are also building Free Software. It means that despite its open licensing, the Free Software movement maintains the status quo of white supremacy, patriarchy and capitalism.
For Free Software to truly be free - to be free for anyone to build and use, we need to radically restructure our projects. It means building diverse communities where we are accountable to one another.
Many free software projects have already begun this work. Just a few examples: Rust crafting and enforcing a thoughtful code of conduct, Ghost valuing design and user research throughout their work, Backdrop governing its project democratically and mentoring new contributors.
When we embody inclusion and accountability, we grow vibrant communities building and using software that offers a clear alternative to corporate, proprietary software; software we can truly call free.
Find It Cambridge is an online resource that empowers families, youth, and those who support them to easily find activities, services, and resources in Cambridge, Massachusetts. It serves as a one-stop-shop website for those who live and work in Cambridge.
Agaric led development and functionality-focused consulting on this project from its inception in 2016. In 2020, we upgraded the site and made the program locator and event finder platform that powers it available to all communities.
Building an event calendar and program directory central to people’s lives is challenging. City governments are notorious for silos and redundancy. The City of Cambridge was determined to do things differently.
This started with thorough user research led by the city. Over 250 interviews and 1,250 surveys were completed by Cambridge residents and representatives from the city, schools, and community-based organizations. Taking the time to survey and interview everyday residents ensured we could confidently build a truly helpful site.
From that research we learned that the site needed:
To make the research findings a reality we combined forces with Terravoz, a digital research and development agency, and Todd Linkner, a designer and front-end developer who defined Find It Cambridge’s brand identity and developed an accompanying style guide.
There are hundreds of events, programs, and organizations in Cambridge. To find exactly what one is looking for, a sophisticated filtering system is a must. We chose Apache Solr, the leader of the pack when it comes to advanced filtering.
One particularly interesting facet came out of Cambridge’s unique geography. Despite spanning a relatively small area, Cambridge’s neighborhood boundaries are infamously creative. Even longtime residents don’t necessarily know where one neighborhood ends and another starts. So, while filtering by neighborhood is helpful, we decided a visual aid was in order.
Todd Linkner created a custom SVG image file representing Cambridge’s neighborhoods. We then took that SVG file and wrote a custom module that associates each neighborhood map section to a Drupal vocabulary term. The result is a clickable map filter aiding site visitors in quickly finding programs and activities in their area.
For a knowledge hub like Find It Cambridge to thrive, it needed buy-in from service providers. Getting their input during the research phase set that relationship off on the right foot. The resounding feedback was that the site needed to be easy for them to use.
This proved to be a challenge because while ease of use was critical, it was also essential that events and programs have rich metadata. The more data we ask of users, the more complex interfaces become.
To address this we leveraged Drupal’s customizable dashboard and the Field Groups module.
By default, the first page a user sees when logging into a Drupal site is an underwhelming user profile page.
We customized a dashboard with the key actions providers take on the site: creating new content, updating past content and answering questions about the site.
While there is a Drupal Dashboard module, we opted to build this ourselves for maximum flexibility and control. Doing so allowed us to break information out into several working tabs. A custom administrative page for internal documentation and other Find It Cambridge information turns control of the “Have Questions?” section of the dashboard over to site administrators, rather than having it hardcoded.
With dozens of service providers managing content on the site, mistakes are bound to happen. The worst scenario is accidentally deleting a node: in Drupal, when a node is deleted, it is gone forever. To protect against this, we used the Killfile module to “soft delete” nodes, allowing for their recovery if needed.
Another key piece to getting relevant, timely information added to the site is helping the Find It Cambridge team remind and support service providers to use the site and update their information. To that end we put together a statistics page listing organizations in alphabetical order, along with the number of programs and events they have. This allows the team to quickly spot duplicate entries and other incorrect data.
We also implemented a notification system. Any time a service provider adds or updates content, the Find It team receives an email. This helps managers stay on top of the site's ever-changing content.
Since Find It Cambridge launched, 333 organizations have created accounts and contributed to the directory. Residents now have a single site they can turn to for staying connected with events and accessing programs. The effort has also fostered increased collaboration across city departments and services.
Connecting community is an ongoing process and we continue to improve the site to better connect residents.
In 2020, we completely overhauled the site and built the program locator and event finder that powers FindItCambridge as software and as a platform available to all cities, towns, and regions to adopt.
We do recommend moving directly to Drupal 9 (which was released on June 3rd of 2020), however:
Moving to Drupal 8 or to Drupal 9 is much the same. Drupal 8 starts what i call the "modern Drupal" era. Whereas going from Drupal 5 to 6, or 6 to 7, or 7 to 8 broke backward compatibility and might as well have been a full rebuild (so we would often recommend skipping a version - say, staying on Drupal 6 and waiting for Drupal 8 to be ready), going from Drupal 8 to 9 is closer to going from Drupal 8.8 to 8.9: an in-place upgrade from 8.9 to 9.0. Going from 9 to 10 will work the same way, and that's the plan and promise for Drupal 8 on out.
All that said, if anything significant needs fixing on your current Drupal 7 site, or you are looking to make any improvements, you'll want to do that on Drupal 8 or 9 ("Drupal 8/9," as we phrased it back when Drupal 9 was still a pretty recent release; now we can just say Drupal 9 - or, as i call it to emphasize the decreased importance of major version numbers, modern Drupal).
Agaric is always happy to discuss more! Mostly what i'm saying here is that the useful things to talk about are your specific goals for the site - when you want to accomplish what - because the official support cycles are a distraction in the current context of Drupal. So make sure your current site is maintained, but take your time getting clear on your objectives, and contact Agaric or the Drupal professionals of your choice when you think it might make sense to bring your site into the era of modern Drupal.