Phase2 Technology

I've Seen This Movie Before.

Back in October 1992, I walked into the Moscone Center in San Francisco for Interop, one of the biggest networking conferences of the era. I'd seen a lot of tech demos before, but one stopped me in my tracks.

A small booth was showing something called 10Base-T (Ethernet over ordinary twisted-pair telephone wiring). Two computers, quietly exchanging data, surrounded by a deliberately hostile environment: electric motors running, vacuum cleaners humming, devices designed to generate every kind of electrical interference imaginable. And the data transfer was rock solid.

It doesn't sound like much today. But what I understood in that moment was that Ethernet had just escaped the lab. It no longer needed expensive, cumbersome thick coaxial cable snaked through walls by specialists. Now it could run over the same phone wiring already in every office building in the world. That demo didn't just show a better cable. It showed the beginning of the Internet revolution: the moment a transformative technology became something anyone could use, anywhere.

I hadn't felt that feeling again in thirty-plus years.

Until last week at HumanX 2026.

Welcome to the Next Revolution
HumanX brought together a remarkable cross-section of AI's most consequential voices: thousands of leaders, all under one roof at the same Moscone Center that hosted Interop 34 years earlier. The speaker roster read like a who's who of the AI world: Fei-Fei Li, Matt Garman, Bret Taylor, Andrew Ng, Ali Ghodsi, Vinod Khosla, Al Gore, Ray Kurzweil, and voices from Snowflake, Anthropic, NVIDIA, Perplexity, Zoom, Cursor, Salesforce, and dozens more. Speaker after speaker gave me the same sense of déjà vu: agentic AI has been talked about for a while now, but this felt like the point where it stops being a topic and starts being a reality.

Not AI as a chatbot or a co-pilot that helps you write emails faster, but AI as an autonomous agent: something that can take a goal, connect to your systems and data, make decisions, take actions, and deliver results, all without a human clicking through each step.

The parallels to 1992 are striking. Just as 10Base-T democratized networking by making it accessible and practical, agentic AI is doing the same for intelligent automation. The question is no longer "can AI do this?" It's "how do we actually deploy it at scale?"

The Signal Through the Noise
Across every session and panel, a few themes surfaced over and over again:

  1. Agents are where the value actually lives. Every major platform company (Salesforce, AWS, Vercel, Sierra) agreed that generative AI's biggest ROI isn't in content creation. It's in agents that take action inside real business workflows: customer service, sales prep, code generation, HR processes. The companies winning are the ones moving from "demos" to production agents doing real work at scale.
  2. Adoption is a culture problem, not a technology problem. The companies successfully scaling AI aren't the ones with the best models; they're the ones that normalized AI use across every role, embedded it in existing workflows, and gave employees permission to experiment. The biggest adoption killers? Cultures where you have to be the smartest person in the room, or where using AI feels like admitting weakness.
  3. The workforce reskilling challenge is urgent and underestimated. Andrew Ng made a bold call: everyone should learn to code. Not because they'll write code by hand, but because AI makes it possible for non-engineers to build things, and those who can will dramatically outperform those who can't. Meanwhile, the gap between the pace of AI change and universities' ability to update their curricula is widening fast.
  4. Humans need to stay in the loop. Despite all the agentic enthusiasm, speaker after speaker was clear: humans still need to own the decisions that matter. AI handles the rote, the repetitive, the data-gathering, but accountability, judgment, and empathy remain human responsibilities. "People plus AI is a new way to work" was practically the conference motto.

Data Is the Hardest Problem, Still
If there was one problem that came up again and again, sometimes directly and sometimes quietly lurking under the surface, it was data. Specifically, the challenge of connecting AI to the right data, in the right context, with enough quality and trust to actually act on it.

The "garbage in, garbage out" problem is very real in the agentic era. When an agent can autonomously cancel an order, qualify a loan, or generate a report, bad data doesn't just produce a wrong answer, it can trigger a costly wrong action.

Several distinct data challenges kept surfacing:

  • Most enterprise data is unstructured and hard to use. The good stuff isn't in clean spreadsheets. It's in videos, PDFs, Slack threads, email chains, and Confluence pages. Retrieval-augmented generation (RAG) and vector databases help, but they don't fully solve the problem of extracting reliable, contextual intelligence from this kind of data at scale.
  • Data is siloed, and quality varies wildly. Agents need context from a dozen systems simultaneously, but those systems don't naturally talk to each other. And even when you can connect them, the quality of what's in them matters enormously.
  • Data governance adds another layer of complexity. Not everyone should see everything, and neither should every agent. Different people have different access levels across different systems, and when an AI tries to synthesize information across all of them, enforcing those boundaries while still delivering useful answers is a genuinely hard problem. It's not just a technical challenge; it's an organizational and legal one too.
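The hard part of that last challenge is that enforcement has to happen before synthesis, not after: anything the model sees can leak into its answer. A minimal sketch of the pattern, where retrieved records are filtered against the requesting user's permissions before they ever reach the prompt (all source names and data here are hypothetical, not from any vendor mentioned above):

```python
# Hypothetical sketch: enforce per-user access boundaries on retrieved
# documents *before* an agent synthesizes an answer across systems.

from dataclasses import dataclass


@dataclass(frozen=True)
class Document:
    source: str     # e.g. "jira", "slack", "gdrive"
    acl: frozenset  # principals allowed to read this document
    text: str


def filter_for_principal(docs, principal):
    """Drop anything the requesting user (or an agent acting on their
    behalf) is not allowed to see, before it enters the model's context."""
    return [d for d in docs if principal in d.acl]


docs = [
    Document("jira", frozenset({"alice", "bob"}), "PROJ-12 is blocked"),
    Document("gdrive", frozenset({"alice"}), "Client pricing draft"),
]

visible = filter_for_principal(docs, "bob")
# bob sees only the Jira note; the pricing draft never enters the prompt.
```

The sketch captures the technical half of the problem; as noted above, deciding what each agent's effective permissions should be in the first place is the organizational and legal half.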


I've been working through exactly these challenges firsthand. Our team has been building what we call an Intelligence Layer for client projects: an agent that connects to the full project ecosystem (Slack, Google Drive, Jira, GitHub, Salesforce, and more). The goal is to give anyone on a project team the ability to ask natural language questions about project status, technical decisions, client context, and get accurate, grounded answers.

It works remarkably well, until you hit the data relationship problem. When the same information exists in multiple systems, which source is authoritative? If Jira says a ticket is closed but the related GitHub PR is still open, what does the agent say? If a client question was answered in a Slack thread and later updated in a Google Doc, which is current? These aren't AI problems; they're data integrity problems that AI inherits.
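One pragmatic way to tame these conflicts is an explicit authority policy: declare which system is the source of record for each kind of fact, and fall back to recency when no authority is declared. A hedged sketch of that idea (the policy table, field names, and examples are illustrative, not how our Intelligence Layer actually resolves this):

```python
from datetime import datetime, timezone

# Illustrative authority ranking: earlier in the list wins for that fact type.
AUTHORITY = {
    "ticket_status": ["jira", "github"],   # Jira is the system of record
    "client_answer": ["gdrive", "slack"],  # docs supersede chat threads
}


def resolve(fact_type, candidates):
    """Pick one record from conflicting sources.

    candidates: list of dicts like
        {"source": "jira", "value": "closed", "updated": datetime(...)}
    """
    for source in AUTHORITY.get(fact_type, []):
        hits = [c for c in candidates if c["source"] == source]
        if hits:
            # Most recent record from the most authoritative source wins.
            return max(hits, key=lambda c: c["updated"])
    # No declared authority: fall back to the most recently updated record.
    return max(candidates, key=lambda c: c["updated"])


ts = lambda day: datetime(2026, 4, day, tzinfo=timezone.utc)
winner = resolve("ticket_status", [
    {"source": "github", "value": "open", "updated": ts(15)},
    {"source": "jira", "value": "closed", "updated": ts(10)},
])
# Jira wins despite being older, because it is the declared system of record.
```

The policy table is the honest part of this design: it forces the organization to write down which system it actually trusts, which is exactly the data-integrity question the agent would otherwise guess at.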

Some vendors are tackling this directly. DevRev's "Computer" product, for example, is built around the concept of Computer Memory: a unified, AI-ready layer that ingests data from across your tools and systems (structured and unstructured) into a single source of truth that agents can query and act on. It's an approach I’m watching closely.

I’m Here For It
HumanX 2026 felt like Interop 1992: the moment a transformative technology became practical, accessible, and unstoppable. Agentic AI is no longer a research project or a vendor pitch. It's running in production at companies around the world, doing real work, at real scale.

Not everything you see today will survive. Not every agent platform, not every AI startup, not every use case will make it. That's fine. The dotcom era gave us a lot of Pets.coms, but it also gave us Amazon.

The question for every practitioner and leader isn't whether to engage with agentic AI. It's whether you're going to be in the driver's seat when the revolution arrives, or scrambling to catch up after it passes you.

I know which side I plan to be on.

Interested in talking more about the Intelligence Layer we're building, or how AI agents can be applied to your business?

I'd love to connect.

Publication Date: Thu, 04/16/2026
Mike Potter, Principal Engineer, CMS

Mike’s career started as an experimental neutrino particle physicist before creating the first WWW home page for Los Alamos National Laboratory. Mike has extensive experience architecting, designing, and overseeing the implementation of many complex enterprise solutions for Phase2, and also architected and led the development of the Open Atrium collaboration framework product.


Apoca-optimism: Notes from SXSW

South by Southwest, SXSW, or simply "south by": no matter how you say it, Austin hosts a one-of-a-kind festival, boasting celebrities, music, art, and cutting edge innovation splashed across downtown.

On the other side of it, I find my brain stuffed with that new-things goodness that only brilliant people having inspiring conversations can bring. And tacos. Really great tacos.

I can't share the tacos with you all, but I can pull on a few mental threads. Because across very different sessions, from biotech to product strategy to design measurement, I kept hearing the same thing underneath it all: the old ways of knowing what's real, what's valuable, and what's possible are breaking down. And the people who will thrive are the ones learning to navigate by conviction rather than certainty.

There's a word for that feeling. I heard it somewhere in the blur of south-by, and it stuck: apoca-optimism. One of those phrases that makes you go "Yeah. YEAH. That's it exactly." In a world spinning on a tilt-a-whirl of changes and AI upheaval, it's hard to look at what's coming without some sense of dread. Of a massive and imminent ending. But also... maybe something beautiful too? The weird and wild and wondrous things at our feet right now. A raw abundance of possibility.

That tension, between ending and beginning, and the overwhelm of navigating it, ran through everything I heard and saw.

The impossible, now merely difficult

Decoding Nature: How AI is Learning to Program Biology

Take the collaboration between Basecamp Research, Microsoft, and UPenn. Together, they've built an LLM that doesn't speak in human language. It speaks in the language of life itself: DNA. The questions being asked of their model, EDEN, are uncovering new antibiotic targets for an increasingly drug-resistant host of diseases. And the accuracy is staggering: 95% hit rate in predicting antimicrobial function.

Getting there was no meager task, and absolutely not "vibe code." The raw data for such a project was missing, simply not enough sequences to train on. Scientific publications aren't like the rest of the internet. They contain only the end product of thought: years of work distilled into a single paper. For a model, this is like learning to speak English by only hearing the last word of every conversation. Validation was its own problem: you can spot a mangled sentence in a heartbeat, but can you spot a mangled protein? And DNA itself is not a clean language; it's riddled with inconsistencies and "junk" sequences.

But here's the thing: these problems are now merely difficult.

Much ado is made of AI's leaps towards greater efficiency. In essence, being better at familiar flavors of busy. And those improvements are genuinely revolutionary: changing the equation of effort shatters everything from engineering to law practice. But projects like EDEN aren't just doing difficult things more easily. They are doing what was previously impossible.

Hearing smart people share about the miraculous work they've done, sitting fifty feet away, talking to a room full of people eagerly taking notes... there's something contagious in that.

Prospectors and prospecting

How to Build AI-First Products: Models, Memory, Mastery

Not everyone had stories of miraculous change. There was also a sober sifting of the meaningful from the hype. I particularly appreciated this session, because it asked the multi-million-dollar question: in a gold rush, how many prospectors actually strike gold?

There can be little doubt that hype is in abundance. Much like the early ages of the internet or mobile devices, there's a sense of urgency to "just add AI." But in the scramble to not be left behind, some efforts are not just pointless, but quite costly. Remember Jasper AI, the content-writing darling? Mountains of seed money, and then the foundation models simply got better and swallowed the value proposition whole. Or BloombergGPT: millions in investment, rendered obsolete in months when GPT-4 not only matched but outperformed it.

We're far enough into this era that the blunders have had time to mature and be plucked. So what separates the products that endure from the ones that get swept away?

The ground moves fast when models improve faster than your product roadmap. Durability doesn't come from wrapping AI in a pretty shell, or from specialized training. It comes from building something that foundation models can't have and competitors can't easily catch up to. The model is not your moat; the data it's built on is.

Directionally rigorous, not falsely precise

Beyond Beautiful: A Data-Driven Framework for Design ROI

Every day we're asked to make decisions faster, with more data, and higher stakes. So how do you act with conviction when the ground won't stop moving? I found that satisfyingly missing puzzle piece in a session on measuring the real ROI of design. On its face, a brass tacks topic: how do you talk the budget people into letting you do beautiful things? But the deeper message was the one that tied everything together for me.

The presenters had built an actual formula for predicting design's fiscal impact, scoring problem severity, design influence, and execution quality to estimate return on investment. What struck me was that the most important thing about it wasn't the math (which was pretty cool). It was the posture. The willingness to say: we can't prove this precisely, but we can prove it directionally, and that's enough to act on.

Their phrase for it was perfect: directionally rigorous, not falsely precise.
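The session didn't share its exact math, and I won't pretend to reproduce it, but the shape of such a scoring model is easy to sketch. Assume each factor is a coarse 0-1 score and the dollar figures are order-of-magnitude guesses; this is my illustrative reconstruction, not the presenters' formula:

```python
def design_roi_estimate(problem_value, severity, influence, execution, cost):
    """Directional ROI: what fraction of a problem's value design can
    plausibly capture, relative to what the design work costs.

    severity, influence, execution: coarse 0-1 scores.
    problem_value, cost: dollars, order-of-magnitude estimates only.
    """
    captured = problem_value * severity * influence * execution
    return captured / cost


# A six-figure design effort aimed at an eight-figure problem:
estimate = design_roi_estimate(
    problem_value=20_000_000,  # size of the problem being addressed
    severity=0.6,              # how badly the problem hurts today
    influence=0.4,             # how much of it design can actually move
    execution=0.7,             # how well we expect to execute
    cost=300_000,
)
# The point is the order of magnitude (roughly 11x), not the decimals.
```

Notice what the multiplication does: any one weak factor drags the whole estimate down, which matches the "directionally rigorous" posture. The numbers are admittedly soft, but the decision they support (fund it or don't) usually isn't close.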

In a world where data has never been cheaper or more abundant, I think this is essential framing. Humans, and our AI agents, make surprisingly poor decisions in information-rich environments. We cherry-pick what already proves what we want. Call it cognitive bias or context poisoning; it's the same root issue. By letting go of the false promise of precision and more-is-more thinking, and focusing instead on the harder-to-measure shape of truth, we can gain actual insight. We may not have all the data, but we usually have order-of-magnitude understanding. Our six-figure design updates are solving an eight-figure problem. Let's stop worrying about the precision of our estimates.

This applies far beyond design. It's the same discipline that separates the durable AI product from the flash-in-the-pan one. It's the same instinct that let the EDEN researchers push forward without clean data or easy validation. Knowing you can't be exactly right, and building anyway. With rigor, with humility, with direction.

What I brought home

There are more intertwining threads from SXSW than would fit here. But these were the ones I carried out of the murmur and burble of downtown Austin:

The impossible is now merely difficult and our old sense of what's "realistic" can no longer be trusted. A gold rush is underway, and many prospectors will fail because they're chasing the first sparkle, instead of getting real about where to focus. And in all of it, the skill that matters most is learning to be directionally right rather than precisely comfortable.

The world is terrifying and extraordinary. The people who showed up at SXSW aren't pretending otherwise. They're learning to build in the turbulence.

That, and the tacos. The tacos were really something.

Proudly written with editorial assistance from my good buddy Claude.


Publication Date: Wed, 04/01/2026
Caroline Casals, Software Architect

Caroline is an Acquia-certified Site Developer and Acquia Approved Site Studio 6 Site Builder who is one of our most passionate technical consultants.
