Here is how to deal with the surprising-to-impossible-seeming error "Unable to uninstall the MySQL module because: The module 'MySQL' is providing the database driver 'mysql'."
Like, why is it trying to uninstall anything when you are installing? Well, it is because you are installing with existing configuration— and your configuration is out-of-date. This same problem will happen on configuration import on a Drupal website, too. (See update below for those steps!)
Really this error message is a strong reminder to always run database updates and then commit any resulting configuration changes after updating Drupal core or module code.
And so the solution is to roll back the code to Drupal 9.3, do your installation from configuration, and then run the database updates, export configuration, and commit the result.
For example:
git checkout <commit-hash-of-earlier-composer-lock>
composer install
drush -y site:install drutopia --existing-config
git checkout main
composer install
drush -y updb
drush -y cex
git status # Review what is here; git add -p can also help
git add config/
git commit -m "Apply configuration updates from Drupal 9.4 upgrade"
The system module's post-update hook enable_provider_database_driver is doing the work here to "Enable the modules that are providing the listed database drivers." Pretty cool feature, and a strong reminder to always, always run database updates and commit any configuration changes immediately after any code updates!
This is what you probably already did, before the drush -y cim failed (luckily, luckily it failed).
composer update
drush -y updb
All that is great! Now continue, not with a config import, but with a config export:
drush -y cex
git status # Review what is here; git add -p can also help
git add config/
git commit -m "Apply configuration updates from Drupal 9.4 upgrade"
Remember, after every composer update and database update, you need to do a configuration export and commit the results— database updates can change configuration, and if you do not commit those, you will undo these intentional and potentially important changes on a configuration import. If you ran into this problem on a configuration import, it is a sign of breakdown in discipline in following these steps!
Every time you bring in code changes with composer update all this must be part and parcel:
composer update
drush -y updb
drush -y cex
git status # Review what is here; git add -p can also help
git add config/
git commit -m "Apply configuration from database updates"
Photo courtesy of Women of Color in Tech stock images. CC Attribution License
The entity_generate process plugin receives a value and checks whether an entity with that name exists. If the term exists, it uses it; if it does not, it creates it (which is precisely what I need).
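Conceptually, that lookup-or-create behavior can be sketched in plain PHP (a simplified illustration, not the plugin's actual code; the array stands in for taxonomy term storage):

```php
<?php

// Simplified sketch of entity_generate's lookup-or-create behavior.
// $terms stands in for the taxonomy_term storage; keys are term IDs.
function lookup_or_create_term(array &$terms, string $name): int {
  // Look for an existing term with this name (case-insensitive,
  // mirroring the plugin's ignore_case option).
  foreach ($terms as $tid => $term_name) {
    if (strcasecmp($term_name, $name) === 0) {
      return $tid;
    }
  }
  // No match: create a new term and return its new ID.
  $tid = $terms ? max(array_keys($terms)) + 1 : 1;
  $terms[$tid] = $name;
  return $tid;
}
```

Either way, the plugin hands back a term ID the migration can store in target_id.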
So, here is a snippet of the article migration YAML file using the entity_generate plugin:
id: blog
migration_group: Drupal
label: Blog
source:
  plugin: d7_node
  node_type: blog
destination:
  plugin: entity:node
process:
  status: status
  created: created
  field_tags:
    plugin: sub_process
    source: field_tags
    process:
      target_id:
        - plugin: entity_generate
          source: name
          value_key: name
          bundle_key: vid
          bundle: tags
          entity_type: taxonomy_term
          ignore_case: true
…
In our field_tags field we are using the Drupal 7 field_tags values: we are going to read them and pass each value into the entity_generate plugin to create the entities. In this example there is a problem: the d7_node source plugin (provided by core's node module) gives us the taxonomy term IDs, which would make the entity_generate plugin create taxonomy terms using the IDs as the term names, and that is not what we want.
So what I need to do is get the term names, not their IDs, from somewhere; to do that, I need to add an extra source property.
First, we need to create a new custom module and in there create a source plugin which extends the d7 Node source plugin, something like this (let's say that our custom module's name is my_migration):
Create the file:
my_migration/src/Plugin/migrate/source/MyNode.php
And the file should contain this code:
<?php

namespace Drupal\my_migration\Plugin\migrate\source;

use Drupal\migrate\Row;
use Drupal\node\Plugin\migrate\source\d7\Node;

/**
 * Adds a source property with the taxonomy term names.
 *
 * @MigrateSource(
 *   id = "my_node",
 *   source_module = "node"
 * )
 */
class MyNode extends Node {

  /**
   * {@inheritdoc}
   */
  public function prepareRow(Row $row) {
    $nid = $row->getSourceProperty('nid');
    // Collect the taxonomy term IDs referenced by this node.
    $tags = $this->getFieldValues('node', 'field_tags', $nid);
    $tids = [];
    foreach ($tags as $tag) {
      $tids[] = $tag['tid'];
    }
    $names = [];
    if ($tids) {
      // Look up the names for the collected term IDs.
      $query = $this->select('taxonomy_term_data', 't');
      $query->condition('tid', $tids, 'IN');
      $query->addField('t', 'name');
      $result = $query->execute()->fetchCol();
      foreach ($result as $term_name) {
        $names[] = ['name' => $term_name];
      }
    }
    $row->setSourceProperty('field_tags_names', $names);
    return parent::prepareRow($row);
  }

}
It does the following things: collects the term IDs referenced by the node, queries the taxonomy_term_data table for the corresponding names, and saves them in a new source property.
Now our rows will have a property called field_tags_names with the term names, and we can pass this data to the entity_generate plugin.
We need to make a few adjustments to our initial migration file. First, and most important, update the source plugin to use our new source plugin:
source:
  plugin: my_node
…
And update the source in the field_tags field to use the new field_tags_names source property.
…
  field_tags:
    plugin: sub_process
    source: field_tags_names
…
The final migration file looks like this:
id: blog
migration_group: Drupal
label: Blog
source:
  plugin: my_node
  node_type: blog
destination:
  plugin: entity:node
process:
  status: status
  created: created
  field_tags:
    plugin: sub_process
    source: field_tags_names
    process:
      target_id:
        - plugin: entity_generate
          source: name
          value_key: name
          bundle_key: vid
          bundle: tags
          entity_type: taxonomy_term
          ignore_case: true
…
And that's it; if we run the migration, it will create the terms on the fly if they do not exist, and use them if they do.
CKEditor is well-known software with a big community behind it and it already has a ton of useful plugins ready to be used. It is the WYSIWYG text editor which ships with Drupal 8 core.
Unfortunately, the many plugins provided by the CKEditor community can't be used directly in the CKEditor that comes with Drupal 8. It is necessary to let Drupal know that we are going to add a new button to the CKEditor.
Drupal allows us to create different text formats, where depending on the role of the user (and so what text formats they have available) they can use different HTML tags in the content. Also, we can decide if the text format will use the CKEditor at all and, if it does, which buttons will be available for that text format.
That is why Drupal needs to know about any new button, so it can build the correct configuration per text format.
We are going to add the Media Embed plugin, which adds a button to our editor that opens a dialog where you can paste an embed code from YouTube, Vimeo, and other providers of online video hosting.
First of all, let's create a new module which will contain the code of this new button, so inside the /modules/contrib/ folder let's create a folder called wysiwyg_mediaembed. (If you're not intending to share your module, you should put it in /modules/custom/— but please share your modules, especially ones making CKEditor plugins available to Drupal!)
cd modules/contrib/
mkdir wysiwyg_mediaembed
And inside let's create the info file: wysiwyg_mediaembed.info.yml
name: CKEditor Media Embed Button (wysiwyg_mediaembed)
type: module
description: "Adds the Media Embed Button plugin to CKEditor."
package: CKEditor
core: '8.x'
dependencies:
  - ckeditor
Adding this file allows Drupal to install the module. If you want to read more about how to create a custom module, you can read about it here.
Once we have our info file, we just need to create a Drupal plugin which will give CKEditor the information about this new plugin. We do that by creating the following class:
mkdir -p src/Plugin/CKEditorPlugin
touch src/Plugin/CKEditorPlugin/MediaEmbedButton.php
With this content:
<?php

namespace Drupal\wysiwyg_mediaembed\Plugin\CKEditorPlugin;

use Drupal\ckeditor\CKEditorPluginBase;
use Drupal\editor\Entity\Editor;

/**
 * Defines the "wysiwyg_mediaembed" plugin.
 *
 * @CKEditorPlugin(
 *   id = "mediaembed",
 *   label = @Translation("CKEditor Media Embed Button")
 * )
 */
class MediaEmbedButton extends CKEditorPluginBase {

  /**
   * Gets the path to the library folder.
   *
   * Usually all the libraries are inside the '/libraries/' folder
   * in the Drupal root.
   */
  public function getLibraryPath() {
    $path = '/libraries/mediaembed';
    return $path;
  }

  /**
   * {@inheritdoc}
   *
   * The other CKEditor plugins our plugin depends on; in our case, none.
   */
  public function getDependencies(Editor $editor) {
    return [];
  }

  /**
   * {@inheritdoc}
   *
   * The path where CKEditor will look for our plugin.
   */
  public function getFile() {
    return $this->getLibraryPath() . '/plugin.js';
  }

  /**
   * {@inheritdoc}
   *
   * We can provide extra configuration if our plugin requires it;
   * in our case we do not need it.
   */
  public function getConfig(Editor $editor) {
    return [];
  }

  /**
   * {@inheritdoc}
   *
   * Where Drupal will look for the image of the button.
   */
  public function getButtons() {
    $path = $this->getLibraryPath();
    return [
      'MediaEmbed' => [
        'label' => $this->t('Media Embed'),
        'image' => $path . '/icons/mediaembed.png',
      ],
    ];
  }

}
The class's code is pretty straightforward: it is just a matter of letting Drupal know where the library is and where the button image is and that's it.
All that remains is to download the library, put it in the correct place, and activate the module. If all went OK, we will see our new button on the Drupal text formats page (usually at /admin/config/content/formats).
This module was ported because we needed it in a project, so if you want to know how this code looks all together, you can download the module from here.
Now that you know how to port a CKEditor plugin to Drupal 8 the next time you can save time using Drupal Console with the following command:
drupal generate:plugin:ckeditorbutton
What CKEditor plugin are you going to port?
Updated 2025-11-14: NEDcamp2025 slides markdown source
You've built sites with Drupal and know that with a few dozen modules (and a ton of configuring), you can do nearly everything in modern Drupal.
But what do you do when there's not a module for that? When the modules that exist don't meet your needs, and cannot be made to by contributing changes?
You make your own.
This blog post accompanies the New England Drupal Camp session and helps you take that step. All you need to do is write two text files. The first file tells Drupal about the module; it is not code so much as information about your code. The second file can have as little as three lines of code in it. Making a module is something that anyone can do. Every person developing a module is still learning.
Slides source code: https://gitlab.com/agaric/presentations/when-not-a-module-for-that
(Site with slides coming.)
This is about the simplest module you can make, and, unlike the popular implications of its name, it does not grow by adding bits and pieces of other modules as we build a monstrosity (that is what we will do later, as exhuming and incorporating adaptations of other modules' code is our main technique for making new modules).
Frankenstein (example) | Drupal.org
Put your module on Drupal.org: drupal.org/node/add/project-module
(I honestly do not know where they hide that link, but remember on any Drupal site you can go to /node/add and see what content you have access to create, and that includes creating projects on Drupal.org!)
It will start its life as a "sandbox" module by default.
Tip: Do not try to choose a part of core under "Ecosystem", and always check that the node ID it references is really one you want— there are a lot of projects with similar or even the same name.
Do not get overwhelmed; you can come back and edit this page later.
Press Save.
Then press the Version control tab to go to a page with Git instructions tailored to your module.
If you have already started your module and made commits in a repository dedicated to only your module, make certain your commits are on a branch named 1.0.x (the instruction included there git checkout -b 1.0.x will be enough to do that) and then follow only the git remote add and git push steps.
Double-checked to make certain the "Frankenstein" example module would not be a better example of when already "There's a module for that," and discovered the sad backstory: it exists in Drupal 7, people wanted it ported to modern Drupal, and someone did port it, but in a private GitHub repository. The module maintainers did not take action, and now that repo is gone.
(To do: Make into its own post.)
I apologize, on behalf of all of Drupal, that this is harder than it ought to be.
First, create an issue (if there is not one already) stating the needed feature or bug fix.
Before you start modifying the module's code, go into your project's composer.json and change the version of the module you are using and want to contribute to, making it the dev version.
If you have gotten the module as ^3.0@RC for example, change that to ^3.0.x-dev@dev.
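In the require section of your project's composer.json, the change looks like this (drupal/example stands in for the module you are contributing to):

```json
"require": {
    "drupal/example": "^3.0.x-dev@dev"
}
```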
Run composer update.
Now, so long as you do not have a restriction in your composer.json, composer will get the git repository of that module for you and put it in the usual place for contributed modules, such as web/modules/contrib.
From the command line (terminal), change directory to the module you are modifying and have filed an issue against (for instance, cd web/modules/contrib/example).
Back on your issue, press the light green "Create issue fork" button and follow the instructions it adds to the issue under the "Show commands" link, to the right of the issue fork.
The lines you want to copy and paste are under the "Add & fetch this issue fork's repository" and "Check out this branch" sections in that expanded "Show commands" area.
Make your code changes and git add them. You can copy the entire line to git commit from the bottom of the issue. And finally, git push.
Now you need to make a merge request.
In the issue, press the link to your issue fork. If you do not see a blue button "Create merge request" at the top of this page, you need to log in. Pressing "Sign in" is likely to log you in automatically because you are already logged in to Drupal.org, but you have to press it.
Press the blue "Create merge request" button at the bottom of the form that the first "Create merge request" button took you to.
To keep using the improvements you contributed to the module while waiting for them to be reviewed and, with lots of luck, eventually merged in, make a couple more modifications to your project's composer.json file. The information you need is the same as you put into the module's git configuration with the issue fork repository and branch checkout commands you copied and pasted above. The easiest way to get that information again is to open the .git/config file in the module (or theme) you have been contributing to: use the URL under "origin" in the "repositories" section of your composer.json, and the branch name from the bottom of .git/config in the "require" section, like this:
"repositories": {
    "drupal/example": {
        "type": "git",
        "url": "https://git.drupalcode.org/issue/example-123456.git"
    }
},
"require": {
    "drupal/example": "dev-123456-issue-fork-branch-name"
},
Very, very helpful to go along with that: add this to your ~/.gitconfig file (or the equivalent on your OS).
# Git push helper via https://www.jvt.me/posts/2019/03/20/git-rewrite-url-https-ssh/
[url "ssh://git@github.com/"]
    pushInsteadOf = https://github.com/
[url "ssh://git@gitlab.com/"]
    pushInsteadOf = https://gitlab.com/
[url "ssh://git@git.drupal.org/"]
    pushInsteadOf = https://git.drupal.org/
First, join Drupal Slack if you need to; more info on that at drupal.org/slack.
Old-style hooks take the hook name ("hook_something_cool") and replace hook with the module machine name ("example") and so having a function example_something_cool— this was the old way of doing namespaces with a sort of honor system. You do not have to worry about that anymore except insofar as you can recognize this style of hook in existing code to use as examples when you are writing new, cool hooks that use actual PHP namespaces.
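That naming convention is mechanical enough to sketch (a toy illustration of the convention only, not Drupal code; the function name here is made up):

```php
<?php

// Old-style hook naming: replace the "hook" prefix with the implementing
// module's machine name. "hook_something_cool" implemented by the module
// "example" becomes the function example_something_cool().
function old_style_hook_function_name(string $hook, string $module): string {
  return preg_replace('/^hook_/', $module . '_', $hook);
}
```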
The main thing it leaves out is where you actually put the hook implementations: in your module's folder, inside a src/Hook subdirectory. Each set of hooks that will be called at the same time, or that otherwise belong together, can go in the same class. At the top of this file goes a namespace declaration:
namespace Drupal\example\Hook;
Not changing example (or whatever it is in the example you copy) to match your module's machine name, or forgetting or making a typo in this line, is the first thing to check for if your hook does not seem to be having any effect.
The listing of hooks to see ways you can, well, hook into Drupal is a bit lower down on that page.
Portside’s advantage is a large moderator team. They have 20 dedicated volunteers who scour the web for the best reporting happening on the left. The technical background of the team spans the spectrum. It was imperative that we develop an authoring experience that was simple for anyone to use, without sacrificing functionality.
Building off the improved authoring experience offered by modern Drupal (beginning with Drupal 8), we tested the pasting of content from common sites Portside republishes, ensuring the various markup coming in from other sources had sensible tag and style removal rules in Drupal's text formatting so that articles display nicely within Portside.
When using dated software, we become adept at workarounds. An infamous one for Portside moderators was embedding tweets. Their old system did not support Twitter embed codes. The workaround was to take screenshots of tweets and link the image to the original tweet. We used the Media Entity Twitter module in concert with Drupal Core’s Media module to enable editors to seamlessly embed tweets in their articles. Goodbye workarounds.
One thing that did work well on their Drupal 7 site was embedding YouTube and Vimeo videos. With the WYSIWYG Media Embed module they simply needed to paste the url into their article and it would display properly. Unfortunately there was no Drupal 8 version of this module. So we helped port the WYSIWYG Media Embed module from Drupal 7 to Drupal 8. Now authors can easily embed videos from YouTube and Vimeo.
CKEditor allows for the defining of custom styles that authors can choose to apply to text. This was helpful for authors who aren’t as versed in HTML. Authors can now style text with terminology meaningful to them, but that uses semantically correct HTML under the hood.
We added custom style options to the WYSIWYG editor so moderators can style content using terms familiar to them, but producing standards-compliant HTML and CSS.
To further save editors’ time, we looked for ways to automate tasks. We identified three areas: publishing posts at a set time, easy posting to social media platforms, automated posting to listservs.
Portside publishes their articles each day at 8pm Eastern. This gives authors a grace period to fix any issues with their article before it publishes. We programmed this into the site so that when an author creates an article they can either save it as a draft, or save it as an article ready for publishing. Each day at 8pm Eastern Time articles in the Draft state remain so, while Articles in the Ready state are published.
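The state logic described above can be sketched in plain PHP (a simplified illustration, not Portside's actual code; articles are represented as arrays with a 'state' key):

```php
<?php

// Run at 8pm Eastern: publish every article in the "Ready" state,
// and leave "Draft" articles untouched.
function run_daily_publish(array $articles): array {
  foreach ($articles as &$article) {
    if ($article['state'] === 'Ready') {
      $article['state'] = 'Published';
    }
  }
  unset($article);
  return $articles;
}
```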
Portside subscribers can stay current with content in several ways:
We built an integration between the website and their listserv software, Listserv, so that when an article is published it is sent to the appropriate listservs. For Portside Snapshot, all articles published on the previous day are aggregated into an email template and sent out.
With automation there is always the risk of human error. We built in safeguards for foreseeable mistakes. Each article is held in a queue which awaits moderator approval before being sent out. Also, if an article is published with an issue, an author can fix the mistake and then resend the article to the listserv. The moderator can then dismiss the first article with the error and accept the follow up corrected article to be sent out.
Moderators can resend an article to the listserv queue when they've made an update.
We streamlined their workflow further by equipping authors to post articles to Facebook and Twitter during the authoring process.
We did this by contributing to the Drupal Social Initiative, a working group harmonizing social networking functionality in Drupal. By making it easy for authors to post content from their website to social media platforms we intend to combat the disturbing trend of more and more content living behind walled gardens, a threat to the Open Web. Offering seamless “post to” workflows allows authors to keep control of their content while easily promoting it to their followers on social media platforms.
The Social Post Facebook and Social Post Twitter modules add a configuration page for users to link their website account with their respective social media accounts. Content published by a user is then automatically posted to their respective social media accounts.
For Portside, we took this one step further by adding a text field on the article content entry form. This field allows authors to enter the text they wish to post accompanying a link to the content they are publishing.
When moderators include text in the social media fields the article is automatically posted to Facebook and/or Twitter.
Twenty editors posting to the site, site visitors suggesting articles to be posted, multiple articles in draft while others are ready but yet to be published— there is a constant flurry of activity on portside.org; enough to make your head spin. Editors needed a dashboard to easily keep track of everything.
We enhanced Drupal's default administrative page for content to show the article's custom workflow state and the Portside Date (the date an article is set to be published). To help editors focus on articles yet to be published, we created a Moderated Content page; for articles queued up to be published, there is now a Scheduled page.
We customized Drupal's default administrative content page to show the moderation state each article is in.
A custom administrative page shows which articles have yet to be published, but are scheduled to be.
A backlink analysis showed the World Wide Web had thousands of inbound links to Portside's internal pages (that is, pages other than the home page). Breaking these links or redirecting them to unrelated content would have profoundly damaged portside.org's ranking in search engines.
Tens of thousands more links to articles on the Portside website undoubtedly live in semi-private social networks (including Facebook), organization group chats (including Slack instances), browser bookmarks, notes, direct message chat, and e-mails that we cannot know about. Furthermore, Portside has sent tens of thousands of e-mails to their tens of thousands of subscribers including many links to portside.org that needed to be preserved.
Both public web links and these 'offline' links (the source of much of the traffic classified as 'direct' or 'no referrer' in web analytics software) are providing value to the people who use them, and removing that value would negatively affect people who may be significant stakeholders (journalists, researchers, potential partners, and others).
For this reason, it was imperative we migrate all of Portside's content (and probably yours too). Our migration of Portside from Drupal 7 to modern Drupal (Drupal 8 at the time, now Drupal 10) involved migrating more than ten thousand articles— though once we work out the kinks in the first twenty or one hundred of a given kind of content, the next thousand or hundred thousand usually import fine, and that was the case here. (All right, some of Portside's oldest content called for special tweaks to our migration scripts to handle an older way moderators had embedded images and videos.)
Often enough in the journey of building websites, you will want to create very specific lists of content. In my current endeavor, I am working in Drupal 9 with two content types: Office location and Event. Each Event occurs at an Office location, which is indicated by an Office location entity reference field on the Event. Note that anything in this blog will also be relevant to Drupal 8, Drupal 10, and beyond.
Here is the user story that we will be working through:
As a site visitor, when I view an Office location node, I should see a list of Events that are occurring at the location represented by the current node, so that I have an easy way to learn about events at locations relevant to me.
To keep this short and sweet, I am going to assume that you have already created a view. However, all views are not created equal, so if you are not seeing the options that you are looking for while editing your view, maybe try creating a new one. We will be working with a block display—other options are possible, but this is the standard "Drupal approach".
Once you are editing a block display for your view, open the Advanced dropdown on the right side. Here is what you will see:

Increasingly, the entities and individuals who participate in such initiatives are geographically dispersed with limited opportunities for in-person or contemporaneous collaboration. They need spaces where they can work asynchronously—online communities that enable teams to share ideas and best practices, receive feedback from experts, and exchange resources. And each space or community needs to be its own. The required feature set is robust, but common—the perfect fit for a Drupal distribution. The NICHQ Collaboratory ("CoLab") provided such spaces, but they were powered by a Drupal 7 distribution which made site-specific customization and ongoing maintenance fraught and unmanageable. Moreover, the list of improvements slated for implementation was increasing, and there was growing consensus that the information architecture for resources and other information needed rethinking.
Anecdotally, we knew the sites could use an upgrade, but there were also features users appreciated. In our upgrade, we did not want to disrupt any beloved processes in our quest for improving the experience.
We started with a survey to find the common pain points and the loved features to leave intact. We learned that overall people still loved the visual design, felt the quality of content was high, and moderation was strong. Most respondents were overwhelmed with the amount of content, despite its quality, and didn't find the notification system helpful.
The next step was to dig deeper to learn why exactly content was hard to find and the best way to solve this. The survey asked if the respondent would be interested in a follow-up interview so we had a good list to draw from.
We knew from our prior audience work that there were two primary user groups: Project Managers and Participants. We decided to start by interviewing one project manager, a new participant and a veteran participant. After this first round of interviews we would then decide if more research was needed.
There were some surprising findings. One is that each community had groups within it, for more specific topics. However, none of the interviewees used these groups. In fact, they got in the way of collaboration. For them, collaboration meant transcending groups and working with anyone within the team. We also learned that categorization had run a bit wild. Different users were tagging resources in different ways, and there were terms that served the same purpose, creating redundancy and inconsistent tagging. This was both a process challenge and a technical one. NICHQ would get on the same page about how to categorize content, and we would lock down the ability to add new categories to project managers only.
We do not build websites page by page, but rather in components that get reused across a site. Atomic design is a way to conceive of components made up of smaller components, made up of even smaller components.
To ease development and design, we built our wireframes using the same concept. In Sketch these are called symbols.
It is tempting to punt all design questions until after wireframes are approved. However, sometimes it is too difficult for us to envision the functionality of something without more design polish added to it.
So, instead of pure, clean transitions we moved from wireframes to design knowing that some questions were still unresolved.
Luckily, our "atomic wireframes" were easy for our designer Todd Linkner to update.
The CoLab Feed is where collaborators stay informed of upcoming events and see the latest member activity.
As mentioned above, NICHQ's need for multiple sites using similar feature sets was the perfect fit for a Drupal distribution. However, we needed a workflow that could allow some sites to diverge from the original codebase but still bring in future improvements.
We had already started in on this endeavor with Drutopia, an initiative to improve the way Drupal manages configuration and distributions.
With the NICHQ CoLab as our practical use case, we worked with the Drupal shop Chocolate Lily to build and improve upon a suite of modules to manage and share different states of configuration across distributions.
For more, read this excellent series of posts by Nedjo Rogers.
Users can ask the community a question they have about children's health.
Here are all the links from the slide Micky told you not to try to write everything down from.
Agarics are members of a few networks and movements both local and global:
And some that didn't make the slides, that other Agarics are a part of:
The only prerequisite is having done some site building with Drupal, and so having familiarity with Drupal configuration and its limits. The information gained will be equally relevant to any version of modern Drupal, including 10, 11, and the coming Drupal 12.
This training is targeted to beginners, but as it is chock full of tips we expect people at intermediate and even advanced levels with Drupal to get significant benefit.
Making a module is something that anyone can do. A useful one may be only two files with a few lines each. There are many (mostly simple) rules to follow and tons of tools to use—and lots of exploration to do. Every person developing a module is still learning!
A working Drupal 11 installation using DDEV will be provided.
This training will be provided over Zoom. You can ask questions via text chat or audio. Sharing your screen is optional, but you might want to do it to get assistance on a specific issue. Sharing your camera is optional.

Attendees will receive detailed instructions on how to set up their development environment. In addition, they will be able to join a support video call days before the training event to make sure the local development environment is ready. This prevents losing time fixing environment setup problems during the training.

LibrePlanet is an annual conference hosted by the Free Software Foundation for free software enthusiasts and anyone who cares about the intersection of technology and social justice. We've attended and spoken at LibrePlanet many times over the years. This year's theme is "Trailblazing Free Software," and in that spirit Micky is speaking on the Orwellian future that has arrived and what tech justice movements we should be supporting and joining to fight for a freedom-loving, solidarity-based future.
LibrePlanet Keynote: How can we prevent the Orwellian 1984 digital world?
Sunday, March 24th
5:15pm-6:00pm
Stata Center, Massachusetts Institute of Technology Room 32-123
Cambridge, MA
We are living in a society where -- as mere individuals -- it seems out of our control and in the hands of those who have the power to publish and distribute information swiftly and widely, or who can refuse to publish or distribute information. Algorithms now sort us into global databases like PRISM or ECHELON, and there are devices such as StingRay cell phone trackers used to categorize our every movement. We may build our own profiles online, but we do not have access to the meta-profile built by the corporate entities that our queries traverse as we navigate online, purchasing goods and services as well as logging into sites where we have accounts. The level of intrusion into our most private thoughts should be alarming, yet most fail to heed the call as they feel small, alone, and unable to defy the scrutiny of disapproval from the powers that govern societal norms and their peers. Together, we can change this.
Micky will engage your mind on a journey to open an ongoing discussion to rediscover and reawaken your own creative thought processes. Together, we build a conversation that should never end as it will join us together transparently maintaining our freedoms, with free software as the foundation. Where do we find our personal power, and how do we use it as developers? Do we have a collective goal? Have you checked your social credit rating lately? Others have.