Drupal Planet

Dripyard Premium Drupal Themes: Dripyard on the Pantheon Livestream

I had a great time chatting with Pantheon’s Chris Reynolds earlier today about what is coming next for Drupal. We dug into Drupal’s upcoming marketplace and where Dripyard fits into that ecosystem.

We also talked through site templates, recipes, single directory components, and how all of this is shaping the future of Drupal.

If you are curious about where Drupal is headed and how the commercial ecosystem is evolving, this one is worth a watch!

Drupal Association blog: Join Drupal for a week of innovation at EU Open Source Week 2026

The Drupal community will have a large presence at EU Open Source Week 2026 in Brussels. You are invited to join Drupal Association board members Baddy Sonja Breidert, Tiffany Farriss, Sachiko Muto, Imre Gmelig, and Alex Moreno at the following events throughout the week. Drupal Association CEO Tim Doyle, CTO Tim Lehnen, and Head of Global Programs Meghan Harrell will also be in attendance.

From 26 January to 1 February 2026, the global open source community is gearing up for EU Open Source Week in Brussels. We are proud to highlight the significant presence of the Drupal project throughout the week.

Here’s where you can find Drupal making an impact:

Play to impact - Drupal AI Hackathon: The event kicks off online on 22 January 2026 and continues in person during EU Open Source Week on 27 and 28 January 2026. During the two-day event, developers, designers, and other digital innovators will work side by side to create smarter, faster, and more open digital solutions built with Drupal and AI.

Learn more and register here.

Drupal Pivot: A 1.5-day, peer-led un-conference, to be held on 27 and 28 January 2026, for Drupal agency CEOs, founders, and senior executives to collaboratively explore the most pressing strategic questions shaping the future of the Drupal business ecosystem.

Learn more and register here.

Drupal EU Government Day: A unique free one-day event, scheduled for 29 January 2026, bringing together policymakers, technologists, and digital leaders from across Europe’s public sector.

Learn more and register here.

EU Open Source Policy Summit:  An invite-only one-day event with free online access, hosted by OpenForum Europe (OFE), bringing together leaders from the public and private sectors to focus on digital sovereignty.

The Drupal Association is honoured to support the EU Open Source Policy Summit 2026 as a Bronze sponsor.

In addition to our sponsorship, we are pleased to highlight two members of our Board of Directors who will be sharing insights during the program:

  • Baddy Sonja Breidert will join the panel: Europe as the World’s Home for Open Source
  • Sachiko Muto will provide an intervention on: UN Open Source Week 2026 and the UN Open Source Principles

Learn more and register here.

FOSDEM: A two-day, volunteer-run, free event for the global open source community. Held every year at the end of January in Brussels, it is a massive gathering where contributors share code, host community-led "devrooms," and collaborate face-to-face without registration fees or corporate barriers.

  • HackerTrain to FOSDEM: An initiative coordinating several train routes to Brussels, bringing hackers from all over Europe to FOSDEM and EU Open Source Week. There will not be one HackerTrain but several, running at the same time on different routes and different dates, all converging in Brussels.

Learn more and register here.

EU Open Source Week is going to be an immersive experience for developers, policymakers, and industry leaders to shape the future of open technology, offering unparalleled opportunities to:

  • Engage directly with EU policymakers,
  • Learn from and collaborate with leading developers,
  • Network and connect with open source communities,
  • Explore new partnerships.

Explore other events happening during the EU Open Source Week on their official website.

Dries Buytaert: Funding Open Source for Digital Sovereignty

As global tensions rise, governments are waking up to the fact that they've lost digital sovereignty. They depend on foreign companies that can change terms, cut off access, or be weaponized against them. A decision in Washington can disable services in Brussels overnight.

Last year, the International Criminal Court ditched Microsoft 365 after a dispute over access to the chief prosecutor's email. Denmark's Ministry of Digitalisation is moving to LibreOffice. And Germany's state of Schleswig-Holstein is migrating 30,000 workstations off Microsoft.

Reclaiming digital sovereignty doesn't require building the European equivalent of Microsoft or Google. That approach hasn't worked in the past, and there is no time to make it work now. Fortunately, Europe has something else: some of the world's strongest Open Source communities, regulatory reach, and public sector scale.

Open Source is the most credible path to digital sovereignty. It's the only software you can run without permission. You can audit, host, modify, and migrate it yourself. No vendor, no government, and no sanctions regime can ever take it away.

But there is a catch. When governments buy Open Source services, the money rarely reaches the people who actually build and maintain it. Procurement rules favor large system integrators, not the maintainers of the software itself. As a result, public money flows to companies that package and resell Open Source, not to the ones who do the hard work of writing and sustaining it.

I've watched this pattern repeat for over two decades in Drupal, the Open Source project I started and that is now widely used across European governments. A small web agency spends months building a new feature. They design it, implement it, and shepherd it through review until it's merged.

Then the government puts out a tender for a new website, and that feature is a critical requirement. A much larger company, with no involvement in Drupal, submits a polished proposal. They have the references, the sales team, and the compliance certifications. They win the contract. The feature exists because the small agency built it. But apart from new maintenance obligations, the original authors get nothing in return.

Public money flows around Open Source instead of into it.

Multiply that by every Open Source project in Europe's software stack, and you start to see both the scale of the problem and the scale of the opportunity.

This is the pattern we need to break. Governments should be contracting with maintainers, not middlemen.


Skipping the maintainers is not just unfair, it is bad governance. Vendors who do not contribute upstream can still deliver projects, but they are much less effective at fixing problems at the source or shaping the software's future. You end up spending public money on short-term integration, while underinvesting in the long-term quality, security, and resilience of the software you depend on.

If Europe wants digital sovereignty and real innovation, procurement must invest in upstream maintainers where security, resilience, and new capabilities are actually built. Open Source is public infrastructure. It's time we funded it that way.

The fix is straightforward: make contribution count in procurement scoring. When evaluating vendors, ask what they put back into the Open Source projects they are selling. Code, documentation, security fixes, funding.

Of course, all vendors will claim they contribute. I've seen companies claim credit for work they barely touched, or count contributions from employees who left years ago.

So how does a procurement officer tell who is real? By letting Open Source projects vouch for contributors directly. Projects know who does the work. We built Drupal's credit system to solve for exactly this. It's not perfect, but it's transparent. And transparency is hard to fake.

We use the credit system to maintain a public directory of companies that provide Drupal services, ranked by their contributions to the project. It shows, at a glance, which companies actually help build and maintain Drupal. If a vendor isn't on that list, they're most likely not contributing in a meaningful way. For a procurement officer, this turns a hard governance problem into a simple check: you can literally see which service providers help build Drupal. This is what contribution-based procurement looks like when it's made practical.

Fortunately, the momentum is building. APELL, an association of European Open Source companies, has proposed making contribution a procurement criterion. EuroStack, a coalition of 260+ companies, is lobbying for a "Buy Open Source Act". The European Commission has embraced an Open Source roadmap with procurement recommendations.

Europe does not need to build the next hyperscaler. It needs to shift procurement toward Open Source builders and maintainers. If Europe gets this right, it will mean better software, stronger local vendors, and public money that actually builds public code. Not to mention the autonomy that comes with it.

I submitted this post as feedback to the European Commission's call for evidence on Towards European Open Digital Ecosystems. If you work in Open Source, consider adding your voice. The feedback period ends February 3, 2026.

Special thanks to Taco Potze, Sachiko Muto, and Gábor Hojtsy for their review and contributions to this blog post.

CKEditor: What’s new in CKEditor Drupal modules: Footnotes, Restricted Editing and more

New versions of the CKEditor contributed modules bring new formatting and content control features to Drupal. Premium Features module 1.7.0 introduces Footnotes and Line Height support, expanding the toolkit for structured writing and print-optimized formatting. The release also includes support for watermarks in Word exports and several fixes to improve authoring experiences and compatibility. Plugin Pack module 1.5.0 delivers the long-awaited Restricted Editing feature for locked-down templates and controlled collaboration, along with configuration improvements to Layout Tables and general stability fixes.

UI Suite Initiative website: Announcement - Display Builder beta 1 has been released

On Monday, 12 January 2026, the UI Suite team proudly released the first beta of the Display Builder module, a display building tool for ambitious site builders: a single display building tool deeply integrated with Drupal. Its powerful unified UI can be used instead of Layout Builder for entity view displays, instead of Block Layout for page displays, and as a replacement for Views' display building feature. No more struggling with overcomplicated, inconsistent UIs!

Tag1 Insights: Speed up your testing with Deuteros

Takeaway:

At Tag1, we believe in proving AI within our own work before recommending it to clients. This post is part of our AI Applied content series, where team members share real stories of how they're using Artificial Intelligence and the insights and lessons they learn along the way. Here, Francesco Placella, Senior Architect | Technical Lead, explores how AI supported his creation of Deuteros, an open-source PHP library that makes Drupal's entity system far easier to unit test, speeding up test runs by orders of magnitude. It is now freely available for anyone to use and contribute to.

Speed up your testing with Deuteros

At Tag1 we like performance.

We also like not being yelled at because a recently released production build introduced a major regression and suddenly the dev team has to scramble to hot-fix the issue.

Our industry settled quite a while ago on a solid strategy to avoid that scenario: automated testing. It's not a silver bullet but, when done right, it can hugely increase confidence in making changes to large and/or complex codebases, where even minimal tweaks can trigger hard-to-predict side effects.

Doing it Right

Automated testing is a vast and complex subject, with some grey areas, especially around terminology; however, there's one key principle worth sticking to: always run your automated tests after any change.

If you are embracing Test Driven Development, you’ll have written new tests covering the changes you are working on before implementing them: this will require running tests multiple times before the task is over. However, even if you are following a more traditional approach, writing test coverage after implementing changes, you will still end up running tests very often.

Admittedly, in the cases above we are talking about a very limited set of tests; however, if tests run slowly, even those will add a significant overhead to development time.

Another widely accepted best practice is to run the entire test suite before merging a change into the project’s repository. This both ensures that the new code behaves as expected and that no regression was introduced in the process. Once again, if tests run slowly, it may take a long time to get feedback, adding further overhead due to context switching.

Aside from the impact on developer experience, which at Tag1 is taken very seriously, the larger the overhead, the harder it is to justify to clients the amount of time spent implementing a change. I guess I don’t need to describe the cascading effects.

The Test Pyramid

An approach that’s very effective in addressing test speed issues is the test pyramid: without going too much into detail, the idea is that a test suite should be organized in a pyramidal shape with multiple layers getting smaller and slower to run as they approach the top.

This commonly translates into having:

  • A large amount of fast-running unit tests, covering individual aspects of the codebase.

  • A medium amount of integration tests combining multiple units into a consistent (sub)system and testing its behavior.

  • A small amount of end-to-end (E2E) tests pulling everything together and testing the business-critical behaviors of the entire codebase.

    Figure 1: The Test Pyramid

What about Drupal?

At Tag1 we do many things, but Drupal has a special place in our hearts. However, running Drupal tests quickly has often been challenging.

Support for automated tests was initially added to Drupal 7 and, as awesome as it was at the time, it involved running a full site installation in the test setup phase before being able to perform a single assertion. The Simpletest framework focused mainly on UI testing and, even if it was sort of possible to use it to write unit tests, it was like driving to your favorite movie theater during rush hour just to check whether the car battery was wired properly. As you can imagine, it took time.

In Drupal 8 and above, the situation vastly improved with the adoption of the PHPUnit testing framework. Multiple types of tests were introduced:

  • Browser tests (and their companion WebDriver tests) are the successors to the original Simpletests and are primarily meant to perform UI testing. These are a very good fit to implement E2E tests, as they cover the full request/response scope, albeit being slow, as they still require an installed site and a full Drupal bootstrap.

It should be noted that the introduction by Moshe Weitzman of Drupal Test Traits helped to speed up these tests, as it allows them to run with an existing database, instead of installing an empty site from scratch every time.

  • Kernel tests run much faster, as they only require installing the subsystems needed to run the test code. These are ideal for integration testing, as they allow you to pick and choose the parts of Drupal available to the test.

  • Unit tests, lightning-fast and narrow-scoped, are what should be used to implement the large majority of the test coverage. By default, they do close to nothing in the setup phase, letting you test individual codebase units in isolation by providing test doubles of all the objects the test subject interacts with.

So we are good now, right? Well, not exactly. The entity subsystem – which powers users, nodes, media items, and taxonomy terms among others – has historically been problematic when it comes to unit testing, because instantiating an entity object involves instantiating a significant amount of dependencies. In most cases this meant that, to write an entity unit test, a Kernel test with all its baggage was needed. Since entities are very prevalent in any Drupal codebase, this meant that a wide range of test scenarios could not rely on fast unit tests.

Meet Deuteros

This issue has been bugging me for a long time, so I decided to experiment with an AI-assisted solution, since long and tedious tasks are exactly what LLMs are supposed to be good at.

The result is Deuteros – Drupal Entity Unit Test Extensible Replacement Object Scaffolding – a PHP library allowing the use of fast unit tests (extending UnitTestCase, for those in the know) when interacting with entities. And, yes, Claude helped me come up with the name as well.

Deuteros is fully open source and available today. You can explore the project, file issues, or contribute improvements on GitHub. The repository includes examples, docs, and instructions to start using it right away.

This is achieved by:

  • Allowing the creation of test doubles implementing various entity and field interfaces. These can be passed around in test code as if they were real entity objects.
use Deuteros\Double\EntityDoubleFactory;
use Deuteros\Double\EntityDoubleDefinitionBuilder;

class MyServiceTest extends TestCase {

  public function testMyService(): void {
    // Get a factory (auto-detects PHPUnit or Prophecy)
    $factory = EntityDoubleFactory::fromTest($this);

    // Create an entity double
    $entity = $factory->create(
      EntityDoubleDefinitionBuilder::create('node')
        ->bundle('article')
        ->id(42)
        ->label('Test Article')
        ->field('field_body', 'Article content here')
        ->build()
    );

    // Use it in your test
    $myService = new MyService();
    $myService->doStuff($entity);

    // These assertions would all pass
    $this->assertInstanceOf(EntityInterface::class, $entity);
    $this->assertSame('node', $entity->getEntityTypeId());
    $this->assertSame('article', $entity->bundle());
    $this->assertSame(42, $entity->id());
    $this->assertSame('Article content here', $entity->get('field_body')->value);
  }

}
  • Allowing the instantiation of real entity objects while providing test doubles for dependencies such as field items and Entity Field API services. This allows unit testing of custom entity logic such as the one typically added to bundle classes.
class MyNodeTest extends TestCase {

  private SubjectEntityFactory $factory;

  protected function setUp(): void {
    $this->factory = SubjectEntityFactory::fromTest($this);
    $this->factory->installContainer();
  }

  protected function tearDown(): void {
    $this->factory->uninstallContainer();
  }

  public function testNodeCreation(): void {
    $node = $this->factory->create(Node::class, [
      'nid' => 42,
      'type' => 'article',
      'title' => 'Test Article',
      'body' => ['value' => 'Body text', 'format' => 'basic_html'],
      'status' => 1,
    ]);

    // Real Node instance with Deuteros field doubles.
    $this->assertInstanceOf(Node::class, $node);
    $this->assertSame('node', $node->getEntityTypeId());
    $this->assertSame('article', $node->bundle());
    $this->assertSame('Test Article', $node->get('title')->value);
    $this->assertSame('Body text', $node->get('body')->value);
  }

}

How AI Enabled a New, Faster Class of Drupal Tests

This all sounds good, but does it actually work? Well, the library is already being adopted by some Tag1 client projects and it seems to be working nicely. To get an idea of the speed gains Deuteros enables, we implemented some benchmark tests: a simple test trait performing a few node manipulation operations was used to measure the performance of multiple test types. In my DDEV local environment, 1,000 iterations of the test complete in less than 1 second as unit tests, while they take ~12 minutes as Kernel tests (details and replication steps available here). Of course, running tests in parallel could reduce the gap by a factor of ten, but I'll still take the former, thanks.

What’s Next?

Complex projects very often require patches to implement new functionality not yet available in the Drupal codebase, or apply bug fixes before they are eventually released. Applying patches for all intents and purposes means running a fork of the original project, which means there is no guarantee that, even if individual patches don’t break tests, their combined behavior won’t introduce regressions. Ideally, to be fully confident about the codebase quality, one would need to run the entire Drupal core + contrib test suite and confirm that nothing breaks. Currently, even on the drupal.org infrastructure – which is fully optimized for the job – on average you need to wait for ~15 minutes to get test results: adding the test suites of a hundred modules increases that by a lot. Running this on every pull request or commit push does not seem viable at the moment.

Deuteros was written as a standalone PHP library, so that it could easily be adopted without having to make significant changes to the host project. If it were widely adopted both in Drupal core and in contrib-land, running all unit test suites could probably happen in less than one minute, increasing the confidence in our changes by a lot, and possibly reducing our energy footprint as a bonus. Should we try?

Try faster, easier testing in your Drupal projects with Deuteros. It's fully open source and ready to use today: explore the code, run the examples, or contribute your own ideas. Get started on GitHub >

Image by Google DeepMind from pexels.

Dries Buytaert: Software as clay on the wheel

A few weeks ago, Simon Willison started a coding agent, went to decorate a Christmas tree with his family, watched a movie, and came back to a working HTML5 parser.

That sounds like a party trick. It isn't.

It worked because the result was easy to check. The parser tests either pass or they don't. The type checker either accepts the code or it doesn't. In that kind of environment, the work can keep moving without much supervision.

Geoffrey Huntley's Ralph Wiggum loop is probably the cleanest expression of this idea I've seen, and it's becoming more popular quickly. In his demonstration video, he describes creating specifications through conversation with an AI agent, then letting the loop run. Each iteration starts fresh: the agent reads the specification, picks the most important remaining task, implements it, runs the tests. If they pass, it commits and exits. The next iteration begins with empty context, reads the current state from disk, and picks up where the previous run left off.

If you think about it, that's what human prompting already looks like: prompt, wait, review, prompt again. You're shaping the code or text the way a potter shapes clay: push a little, spin the wheel, look, push again. The Ralph loop just automates the spinning, which makes much more ambitious tasks practical.

The difference is how state is handled. When you work this way by hand, the whole conversation comes along for the ride. In the Ralph loop, it doesn't. Each iteration starts clean.

Why? Because carrying everything with you all the time is a great way to stop getting anywhere. If you're going to work on a problem for hundreds of iterations, things start to pile up. As tokens accumulate, the signal can get lost in noise. By flushing context between iterations and storing state in files, each run can start clean.
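The loop just described can be sketched in a few lines. The toy below is illustrative only, not Huntley's actual tooling: the task list and file layout are invented for the example. What it shows is the structure that matters: state (the spec and progress) lives on disk, each iteration starts with no carried-over context, and work only "commits" when the verifier passes.

```python
import json
import pathlib
import tempfile

# All persistent state lives on disk, not in the loop's memory.
state_file = pathlib.Path(tempfile.mkdtemp()) / "state.json"
state_file.write_text(json.dumps({"tasks": ["parse", "serialize", "docs"], "done": []}))

def verifier_passes(task):
    # Stand-in for the real verifier (a test suite); here every task "passes".
    return True

while True:
    # Fresh context: re-read the current state from disk each iteration.
    state = json.loads(state_file.read_text())
    remaining = [t for t in state["tasks"] if t not in state["done"]]
    if not remaining:
        break  # nothing left to do: the loop exits
    task = remaining[0]  # pick the most important remaining task
    # ...agent would implement the task here...
    if verifier_passes(task):  # only "commit" when the tests pass
        state["done"].append(task)
        state_file.write_text(json.dumps(state))

print(json.loads(state_file.read_text())["done"])  # → ['parse', 'serialize', 'docs']
```

Because nothing except the files on disk survives an iteration, a crashed or confused run loses nothing: the next iteration simply re-reads the state and continues.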

Simon Willison's port of an HTML5 parsing library from Python to JavaScript showed the principle at larger scale. Using GPT-5.2 through Codex CLI with the --yolo flag for uninterrupted execution, he gave a handful of directional prompts: API design, milestones, CI setup. Then he let it run while he decorated a Christmas tree with his family and watched a movie.

Four and a half hours later, the agent had produced a working HTML5 parser. It passed over 9,200 tests from the html5lib-tests suite. HTML5 parsing is notoriously complex. The specification precisely defines how even malformed markup should be handled, with thousands of edge cases accumulated over years. But the agent had constant grounding: each test run pulled it back to reality before errors could compound.

As Willison put it: "If you can reduce a problem to a robust test suite you can set a coding agent loop loose on it with a high degree of confidence that it will eventually succeed". Ralph loops and Willison's approach differ in structure, but both depend on tests as the source of truth.

Cursor's research on scaling agents confirms this is starting to work at enterprise scale. Their team explored what happens when hundreds of agents work concurrently on a single codebase for weeks. In one experiment, they built a web browser from scratch. Over a million lines of code across a thousand files, generated in a week. And the browser worked.

That doesn't mean it's secure, fast, or something you'd ship. It means it met the criteria they gave it. If you decide to check for security or performance, it will work toward that as well. But the pattern is the same: clear tests, constant verification, agents that know when they're done.

From solo loops to hundreds of agents running in parallel, the same pattern keeps emerging. It feels like something fundamental is crystallizing: autonomous AI is starting to work well when you can accurately define success upfront.

Willison's success criteria were "simple": all 9,200 tests pass. That is a lot of tests, but the agent got there. Clear criteria made autonomy possible.

As I argued in AI flattens interfaces and deepens foundations, this changes where humans add value:

Humans are moving to where they set direction at the start and refine results at the end. AI handles everything in between.

The title of this post comes from Geoffrey Huntley. He describes software as clay on the pottery wheel, and once you've worked this way, it's hard to think about it any other way. As Huntley wrote: "If something isn't right, you throw it back on the wheel and keep going". That's exactly how it feels. Throw it back, refine it, spin again until it's right.

Of course, the Ralph Wiggum loop has limits. It works well when verification is unambiguous. A unit test returns pass or fail. But not all problems come with clear tests. And writing tests can be a lot of work.

For example, I've been thinking about how such loops could work for Drupal, where non-technical users build pages. "Make this page more on-brand" isn't a test you can run.

Or maybe it is? An AI agent could evaluate a page against brand guidelines and return pass or fail. It could check reading level and even do some basic accessibility tests. The verifier doesn't have to be a traditional test suite. It just has to provide clear feedback.
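As a toy illustration of such a non-traditional verifier (the function name, the rule, and the threshold below are all invented for the example), even a fuzzy goal like reading level can be reduced to a pass/fail signal an agent loop can act on:

```python
import re

def verify_page(text, max_avg_sentence_words=20):
    """Hypothetical verifier: pass/fail on average sentence length,
    used as a crude reading-level proxy. The threshold is arbitrary."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    avg = sum(len(s.split()) for s in sentences) / len(sentences)
    return avg <= max_avg_sentence_words

print(verify_page("We help teams ship faster. Simple tools. Clear docs."))  # → True
```

A real verifier for "on-brand" would more likely be another model call scored against brand guidelines, but the shape is the same: text in, unambiguous pass/fail out.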

All of this just exposes something we already intuitively understand: defining success is hard. Really hard. When people build pages manually, they often iterate until it "feels right". They know what they want when they see it, but can't always articulate it upfront. Or they hire experts who carry that judgment from years of experience. This is the part of the work that's hardest to automate. The craft is moving upstream, from implementation to specification and validation.

The question for any task is becoming: can you tell, reliably, whether the result is getting better or worse? Where you can, the loop takes over. Where you can't, your judgment still matters.

The boundary keeps moving fast. A year ago, I was wrestling with local LLMs to generate good alt-text for images. Today, AI agents build working HTML5 parsers while you watch a movie. It's hard not to find that a little absurd. And hard not to be excited.

Drupal AI Initiative: The Future of AI-Powered Web Creation Is People-First, Not Prompt-First

Aidan Foster - Strategy Lead, Foster Interactive

The Big Shift in Web Creation

For years, the hardest part of building a website was technical execution. Slow development cycles, code barriers, and long timelines created bottlenecks.

AI has changed this.

Execution is no longer the limiting factor. Understanding is. The new challenge is knowing your audience, clarifying your message, and structuring the story your website needs to tell.

The future is not prompt-first. It is people-first: strategy, insight, empathy, and structure.

This was the core message of my talk AI Page Building with Drupal Canvas. It is also why Foster Interactive joined the Drupal AI Makers initiative.

But none of this works unless the human layer comes first.

Why People-First AI Matters and Why AI Slop Happens

In a market characterized by AI slop, human insight is the key to being both authentic and distinct.

AI is a powerful assistant, but it cannot replace human judgment.
Large language models can synthesize patterns, but they cannot invent your strategy.

When teams skip the foundational work such as audience research, messaging clarity, and brand systems, AI produces generic output that feels shallow and off-brand.

This is what we call AI slop. The issue is not the model. The issue is unclear inputs.

AI can only accelerate the parts you already understand. The human layer must come first. Audience insight. Value propositions. Tone and language rules. Page-level content strategy.

Without this structure, every output becomes guesswork.

The New Tools: Canvas and the Context Control Center

Drupal’s new AI features are powerful because they finally support how marketers work.

Canvas: The visual editor built for marketers

Canvas allows anyone to build pages using drag and drop.
It offers instant previews, mobile and desktop views, simple undo and redo, and AI built directly into the editor.

You can ask Canvas to assemble a campaign landing page and it uses your brand components, design system, content rules, and tone to create useful starting points.

This is the most marketer-friendly Drupal experience ever made.

The Context Control Center: The AI knowledge base

This is where strategy becomes usable by AI. It allows teams to load audience personas, value propositions, tone guides, brand rules, page templates, messaging frameworks, and content strategy documents.

With this context available, the AI produces work that is aligned, accurate, and consistent.

Instead of guessing, it draws from your organization’s strategic foundation.
For the first time, brand and audience knowledge can be reused across the entire website.

The Demo: How We Built the FinDrop Landing Page

To demonstrate what is possible, we built a fictional SaaS company called FinDrop.

We created product stories, value props, audience personas, PPC ads, content strategy, and a visual system that matched the Mercury design system.

We generated all of this using strategy first, then AI. We crafted brand rules, used Nano Banana for consistent imagery, built campaign assets, and generated full landing pages for three stages of a funnel.

AI gave us speed, but only because the human structure was already in place. Without strategy the output collapsed. With structure it accelerated.

The Real Lesson: AI Only Works When the Inputs Are Strong

Demo of the Drupal Canvas UI with Mercury component library.

The FinDrop demo made something clear. AI did not save time because it is smart. It saved time because the rules were defined. Your success depends on the strength of your foundations.

Clear value propositions. Real audience insight. A defined tone. Predictable page patterns. Brand rules the AI can follow. Without this, AI slows teams down.

At Foster Interactive we are testing the best models for Drupal workflows, refining content strategy structures for the Context Control Center, creating systems to make AI-ready brands easier to build, and bringing the marketer’s perspective into the AI Makers roadmap.

Our goal is simple. Make AI genuinely useful for small marketing teams without sacrificing accuracy or authenticity.

What Is Coming Next for Drupal AI and Why It Matters

Drupal CMS 2 is coming in early 2026. It will include deeper Canvas integration, more intuitive site templates, a lighter AI suite, reusable design systems, expanded knowledge base support, and better tools for auditing and maintaining content.

But the biggest change is this. It will become easy to install the tools and it will be obvious who has done the strategic work. Teams relying solely on AI will blend into the noise.

Teams grounded in human insight will stand out.

Now Is the Time to Unlearn What Is Possible

A few months ago, I did not believe a CMS could generate usable landing pages in minutes or create consistent AI imagery. Then we built FinDrop.

The tools have changed. The pace has changed.

Human insight cannot be outsourced to AI.

We want our AI tools to take care of boring, repetitive jobs to free up our time for creative and strategic work.

The role of marketers is shifting away from production bottlenecks and toward clarity, empathy, positioning, narrative, and audience understanding.

AI can accelerate execution and remove repetitive tasks. But it cannot replace the strategy behind them.

If we get the human foundations right, we create a future where imagination becomes the bottleneck, not time.

That's the future I want to live in.

Ready to put people-first AI into practice?

Start with your foundations. Sit down with your team and audit your brand guidelines. Talk to front-line support and sales - the people closest to your customers. Update your tone, messaging, and audience details. This is the work that makes AI useful.

Then try Canvas. Once your foundations are solid, test what's possible with the upcoming Drupal CMS 2.0 demo at drupalforge.org. (Or if you’re a little more technical, test the Driesnote Demo which is available right now).

Droptica: AGENTS.md Tool: How AI Actually Speeds Up Drupal Work

Friday, 2:00 PM. New developer, production bug. Something's broken with a custom queue worker. In the past, this meant tracking down the previous developer, holding consultations, and losing time – all on a Friday. Now? The developer asks artificial intelligence, and AI responds with useful answers because it knows the project. How? Just one file: AGENTS.md.
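The idea behind AGENTS.md is simply a plain Markdown file at the project root that gives AI coding tools the project context they need. A hypothetical minimal sketch (all paths and names here are illustrative, not from the Droptica article):

```markdown
# AGENTS.md

## Project overview
Drupal site. Custom code lives in web/modules/custom and web/themes/custom.

## Queue workers
Custom queue workers live in web/modules/custom/my_module/src/Plugin/QueueWorker.
Process a queue locally with: drush queue:run my_queue_name

## Conventions
- Follow Drupal coding standards; run vendor/bin/phpcs before committing.
- Configuration is exported to config/sync; never edit exported YAML by hand.
```

With a file like this in the repository, an AI assistant can answer "where is the queue worker defined and how do I run it?" without anyone chasing down the previous developer.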

Undpaul.de: Converting PHP class annotations to PHP attributes

For years, Drupal relied on the doctrine/annotations library to add metadata to plugin classes. This approach works through docblock comments, the /** ... */ blocks that appear above class definitions. While convenient, this method has several inherent limitations and will be gradually replaced by PHP attributes.
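As a minimal sketch of what the conversion looks like, here is core's Block plugin type used as an illustration (Drupal 10.3+ ships the attribute classes; the module and plugin names are hypothetical):

```php
<?php

namespace Drupal\my_module\Plugin\Block;

use Drupal\Core\Block\Attribute\Block;
use Drupal\Core\Block\BlockBase;
use Drupal\Core\StringTranslation\TranslatableMarkup;

// Before: metadata lived in a docblock annotation, parsed by
// doctrine/annotations at discovery time:
//
// /**
//  * @Block(
//  *   id = "example_block",
//  *   admin_label = @Translation("Example block"),
//  * )
//  */

// After: a native PHP attribute, validated by the PHP engine itself.
#[Block(
  id: 'example_block',
  admin_label: new TranslatableMarkup('Example block'),
)]
class ExampleBlock extends BlockBase {

  public function build(): array {
    return ['#markup' => 'Hello'];
  }

}
```

Note that attributes take real PHP values, so the @Translation annotation wrapper becomes an ordinary TranslatableMarkup object.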

DDEV Blog: DDEV January 2026: Year in Review, Looking Ahead, and Community Momentum

This is the time of year that many of us look backward and forward to inform our future, and DDEV is no different. We took the time to review last year and put together plans for 2026.

  • DDEV 2025 Review → The numbers tell an impressive story: growth in adoption, features, and community engagement. Read more↗
  • DDEV 2026 Plans → Our roadmap for sustainability, new features, and expanding the community. Read more↗
  • Board and Advisory Group Meeting was last week, and you can read the summary and watch the recording. See it all↗
Lots of new DDEV GUI experiments!

Many DDEV users love the PhpStorm/IntelliJ DDEV Integration Plugin maintained by AkibaAT and the VS Code DDEV Manager by Biati Digital, but there are new experimental entries in the GUI manager space:

  • DevWorkspacePro by damms005 is a commercial entry in the wrapper-GUI space that has had lots of energy applied to it. There isn't currently a free trial available, and it's a bit pricey, but see it at https://devworkspacepro.com/.
  • DDEVBar for macOS → Klemens Starybrat built a native Mac menu bar app that makes managing DDEV projects simpler. Check it out↗
  • DDEV Manager GUI → VonLoxx created another DDEV wrapper GUI. Try it out↗
  • DDEV GUI by ChaosKing (only tested on Linux) is another entry in this category. Experiment and give feedback↗
Community Tutorials from Around the World
  • Navigating Local Development Landscapes → Tag1 explores modern approaches to local development environments, featuring DDEV. Read on Tag1↗
  • Configurando um Ambiente de Desenvolvimento WordPress com DDEV (Docker) → Willian Rodrigues walks through configuring a fast, standardized WordPress development environment (in Portuguese). Read on Dev.to↗
  • DDEV and Gotenberg: Using Custom Fonts for Your PDFs → Jean-David Daviet explains how to integrate Gotenberg with DDEV for PDF generation with custom fonts (in English and French). Read the tutorial↗
  • Custom Drupal 11 Theme with TailwindCSS 4.1 → Serhii Shevchyk demonstrates setting up a modern Drupal 11 theme using TailwindCSS and DDEV. Read on Medium↗
Community Video Tutorials
  • CraftQuest: DDEV + Vite + Tailwind Boilerplate from Scratch → Learn to build a modern development boilerplate combining DDEV, Vite, and Tailwind CSS. Watch on CraftQuest↗
  • My CraftQuest DDEV Setup → Prolific trainer Ryan Irelan shows his setup for Craft CMS. Watch↗
  • An introduction to Ansible and using ce-deploy in DDEV → Greg Harvey demonstrates deployment workflows using Ansible and ce-deploy with DDEV. Watch on YouTube↗
Help Us Out: Use HEAD

Want to help DDEV development? Consider testing with the HEAD version of DDEV. This helps us catch issues early. We expect that some advanced users with complex Dockerfiles may have hiccups with the upcoming v1.25.0. It's easy; see the docs↗ for instructions.

DDEV Training Continues

Join us for upcoming training sessions for contributors and users.

Zoom Info: Join Zoom Meeting↗ (Passcode: 12345)

Sponsorship Update: You Showed Up

When we shared the news about Upsun lowering their sponsorship, the community responded. In just one month, sponsorship jumped from 58% to 68% of our goal, a 10-percentage-point increase that makes a real difference. Thank you to everyone who stepped up and to those of you who are about to jump on board.

Previous status (December 2025): ~$6,964/month (58% of goal)

January 19, 2026: ~$8,208/month (68% of goal)

If DDEV has helped your team, now is the time to give back. Whether you're an individual developer, an agency, or an organization — your contribution makes a difference. → Become a sponsor↗

Contact us to discuss sponsorship options that work for your organization.

Stay in the Loop—Follow Us and Join the Conversation

Compiled and edited with assistance from Claude Code.

Aten Design Group: Simplified Drupal Views Styling with Custom Style Plugins

jenna | Mon, 01/19/2026 - 13:23 | Drupal

Drupal’s shift to component-based styling has been a welcome change to how we plan out and organize a Drupal theme. While this has gone a long way toward reducing duplication within our styles, figuring out the best way to apply component styles to Drupal structures can sometimes be a challenge. Modules like SDC: Display are beginning to bridge this gap, but they don’t address every use case. One such scenario is applying component styles to a Views list.

No One Size Fits All

Let’s imagine that we have a simple grid component in our design system. The component accepts props for items and column_count. This grid is going to be used throughout the project, and specifically across multiple View displays. There are a couple of common approaches to tackle this.

Apply the Styles Globally

One option is to make the grid styles globally available, then use something like the HTML List plugin provided by Views to add the classes we need through the UI. While it keeps all of the config in Drupal’s UI where other developers might first look to replicate and make changes, we’ve made styles global that we likely don’t need on every page. It also requires detailed knowledge of the design system to know which CSS classes to apply and where to apply them to accomplish the desired styles.

Override View Twig Templates

Another option is to move all of the styling to the Twig templates for those View displays. For example, on a list of blog posts we could create a template like views-view-unformatted--blog-posts.html.twig. Once we have that template, we call our grid component:

{{ include('my_theme:grid', {
  items: rows|map(row => row.content),
  column_count: '4',
}, with_context = false) }}

Now anytime we want to apply this to a new View, all we have to do is copy/paste this into a template for that View and update any needed props. The drawback here is that the Views UI won’t reflect where the styles are coming from, or how to recreate them, and a front-end developer familiar with the theme would have to do the copy/pasting. 

Moving Back into the UI

Creating a custom Views style plugin gives us the best of both worlds. We still can leverage all the benefits of SDC, but expose all the options to site builders to apply to Views keeping the UI intact with the output. Let’s walk through the steps to get it done.

To start, in a custom module, create a new style plugin within \Drupal\my_module\Plugin\views\style. This plugin should extend \Drupal\views\Plugin\views\style\StylePluginBase. At its simplest, all a style plugin needs is:

<?php

declare(strict_types=1);

namespace Drupal\my_module\Plugin\views\style;

use Drupal\views\Plugin\views\style\StylePluginBase;

/**
 * Style plugin to render results in a grid.
 *
 * @ingroup views_style_plugins
 *
 * @ViewsStyle(
 *   id = "my_module_grid",
 *   title = @Translation("Custom Grid"),
 *   help = @Translation("Render results in a grid layout."),
 *   theme = "views_view_my_module_grid",
 *   display_types = {"normal"}
 * )
 */
class MyModuleGrid extends StylePluginBase {
}

This will provide a new option when configuring a View style called “Custom Grid” without extra configuration. It will output the View rows through a new Twig template defined in the theme option within the annotation. In our case: views-view-my-module-grid.html.twig.

Before getting into the markup, we need to align our new template with other View style plugin templates:

/**
 * Implements template_preprocess_HOOK().
 */
function my_module_preprocess_views_view_my_module_grid(&$variables) {
  template_preprocess_views_view_unformatted($variables);
}

From there you can customize your new template as you would a standard views-view-unformatted.html.twig template.

Adding Configuration to the Plugin

As currently set up, our style plugin assumes we want it to work the same everywhere. However, most front-end systems like SDC allow components to be modified by passing props into them. For a grid component, allowing the number of columns to be configured would give this component more flexibility. We can easily choose which props are exposed in Views to site builders.

First, we modify our plugin class:

use Drupal\Core\Form\FormStateInterface;

class MyModuleGrid extends StylePluginBase {

  /**
   * {@inheritdoc}
   */
  protected function defineOptions(): array {
    return parent::defineOptions() + [
      'column_count' => ['default' => '3'],
    ];
  }

  /**
   * {@inheritdoc}
   */
  public function buildOptionsForm(&$form, FormStateInterface $form_state) {
    parent::buildOptionsForm($form, $form_state);

    $form['column_count'] = [
      '#type' => 'select',
      '#title' => $this->t('Column Count'),
      '#description' => $this->t('The number of columns for the grid to use on desktop.'),
      '#default_value' => $this->options['column_count'],
      '#options' => [
        '3' => $this->t('Three Columns'),
        '4' => $this->t('Four Columns'),
      ],
    ];
  }

}

The ::defineOptions() method here provides the default settings for what our form will control. In this case, setting the column_count to 3. ::buildOptionsForm() does exactly what it sounds like: it creates the form that site builders will interact with.

Next, we need to make these options available to our Twig file. To do this, we update our preprocess function from earlier:

function my_module_preprocess_views_view_my_module_grid(&$variables) {
  $options = $variables['view']->style_plugin->options;
  $variables['options'] = $options;

  template_preprocess_views_view_unformatted($variables);
}

Finally, we update our Twig template to use the new options:

{{ include('my_theme:grid', {
  items: rows|map(row => row.content),
  column_count: options.column_count,
}, with_context = false) }}

Scratching the Surface

With all this in place, we can now update our Views to use this new style plugin. And anyone with a basic understanding of Drupal Views can recreate these styles when setting up new Views. There’s a lot more that can be done with Views style plugins. What other Views style plugin functionality would you like us to demonstrate? Let us know in the comments.
 

James Nettik

Talking Drupal: Talking Drupal #536 - Composer Patches 2.0

Today we are talking about Patching Drupal, Composer, and Composer Patches 2.0 with guest Cameron Eagans. We'll also cover Configuration Development as our module of the week.

For show notes visit: https://www.talkingDrupal.com/536

Topics
  • What is Composer Patches 2.0
  • Exploring Community Dynamics in Composer Patches
  • The Genesis of Composer Patches
  • The Decision to Use GitHub
  • Broadening Composer Patches Beyond Drupal
  • The Evolution to Composer Patches 2.0
  • Understanding Workflow Complexities
  • Refining User Experience in 2.0
  • New Features and Enhancements in 2.0
  • Navigating Controversial Changes in 2.0
  • The Role of Dependency Patches
  • Introducing patches.lock.json
  • Best Practices for Patch Management
  • Transitioning to Git Patching
  • Exploring New APIs in Composer Patches 2.0
  • Understanding Capabilities and Events
  • Transitioning to Composer Patches 2.0
  • Future of Composer Patches and Community Contributions
Resources

Guests

Cameron Eagans - cweagans.net cweagans

Hosts

Nic Laflin - nLighteneddevelopment.com nicxvan
John Picozzi - epam.com johnpicozzi
Andy Giles - dripyard.com andyg5000

MOTW Correspondent

Martin Anderson-Clutz - mandclu.com mandclu

  • Brief description:
    • Do you maintain modules that provide configuration files? There's a module that can help manage them.
  • Module name/project name:
    • Configuration Development
  • Brief history
    • How old: created in Apr 2014 by chx, though recent releases are by Joachim Noreiko (joachim)
    • Versions available: 8.x-1.11, which works with Drupal 9.3, 10, and 11
  • Maintainership
    • Actively maintained
    • Security coverage
    • Test coverage
    • Number of open issues: 36 open issues, 7 of which are bugs
  • Usage stats:
    • 2,391 sites
  • Module features and usage
    • The module really provides three useful features. First, it can ensure specific configuration files are automatically imported on every request, as though the contents were pasted into the core "single import" form
    • Second, it can automatically export specific configuration objects into files whenever the object is updated. You provide a list of filenames and the module will derive the objects that need to be exported.
    • Finally, it provides a drush command that can be used to generate all the necessary configuration files for a specific project. You put a list of the files into the project's info.yml file, and then with a single command a fresh copy of all the specified files will be generated and placed directly into the project's configuration folder.
    • For obvious reasons this is not something you should ever have enabled in production, so it's definitely a best practice to pull this in using Composer's require --dev command
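Concretely, that dev-only install might look like this (assuming the module's standard drupal/config_devel package name; commands shown for a Composer-managed site with Drush):

```shell
# Install Configuration Development as a dev-only dependency so it is
# never deployed to production environments.
composer require --dev drupal/config_devel

# Enable it in your local environment only.
drush pm:install config_devel
```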

The Drop Times: Getting Set for More

With 2026 underway, Drupal core has finalised the platform requirements for its next major release, Drupal 12, setting a clear technical direction. The minimum requirements now include PHP 8.5 and recent stable versions of key databases—MySQL 8.0, MariaDB 10.11, PostgreSQL 18, and SQLite 3.45—signalling alignment with upstream standards and long-term ecosystem readiness.

These changes aren’t arbitrary: they reflect Drupal’s maturing release policy, which increasingly coordinates with upstream lifecycles to minimise friction and technical debt. The shift to development on the main branch of core simplifies contribution and release management, while release planning threads outline multiple windows—mid-June, late July, or December—depending on when beta readiness is achieved.

With active development on Composer 2.9+ and updated deprecation tooling already in place, 2026 offers an opportunity for developers and site owners to plan methodically. By aligning hosting stacks and project roadmaps early, teams can prepare for a smoother transition to Drupal 12 when it arrives. The support window for Drupal 10 remains open through December 2026, offering a stable fallback as the next cycle unfolds.


We acknowledge that there are more stories to share. However, due to selection constraints, we must pause further exploration for now. To get timely updates, follow us on LinkedIn, Twitter, Bluesky, and Facebook. You can also join us on Drupal Slack at #thedroptimes.

Thank you.

Kazima Abbas
Sub-editor
The DropTimes

#! code: Drupal 11: Finding A Better Way To Display Code Examples

I've been running this site for about 18 years now and the code I post has been in much the same format since it was started. The code is rendered onto the page using a <code> element and (mostly) syntax highlighted using a JavaScript plugin.

I make use of sites like Codepen and JSFiddle quite a bit, and often link to those sites to show examples of the code in use. Those sites got me thinking about the static nature of the code examples on this site. I have been writing more front-end code recently, but static code examples aren't the best way of showing these features in action. I can upload (and have uploaded) images and GIFs of a feature in action, but those images are many times the size of the code examples in question and serve only to bloat the page.

What I would really like to do is allow active code examples, or a code sandbox, to be injected into the page. This would allow users to interact with code examples rather than them just being static. Clearly a valuable learning tool for any site.

I know that it's possible to embed Codepen examples into a page, but not only does that require a premium subscription, it also creates a disconnect between the code and the content on the site. I wanted a solution that would allow me to write the article and the code examples all within the back end of the Drupal site.

Hosting code examples on a third-party site also comes with some risk: if that site went offline, all of the code examples on my site would stop working. By self-hosting I can make the editing experience better and also ensure that everything works correctly.

What I needed for the site now was some form of code sandbox that could be used to demonstrate simple JavaScript and CSS code without being tied to a third party supplier. I therefore did some searching around to find a suitable container for the code.

Read more
