Why I'm Building an AI That Belongs to Me (Not Big Tech)
It's 11pm on a Tuesday. I'm staring at lines of code in Visual Studio, debugging a dependency loop that shouldn't exist but does. Smooth jazz plays in the background. Somewhere in the file structure, buried in /src/actions/, is the beginning of something that doesn't quite exist yet: a local AI that will remember broad context, work with me, and grow alongside my thinking over years.
A couple of months ago, I made a decision that most people would call unnecessary, maybe even ridiculous. I decided to build my own long-term AI partner on my Mac. Not to use one that already exists, but to build one. From scratch. Locally. With full control over every line of code, every memory structure, every decision pathway.
This isn't a weekend project. This is a multi-month, maybe multi-year commitment that sits at the intersection of my PhD research in human-AI teaming and my need for a tool that doesn't exist yet in the world. This series of articles will document that journey: my process, the trials and tribulations, the breakthroughs, and the inevitable moments where everything breaks at 2am and I question my life choices. First: why build instead of waiting for technology to catch up, when someday all personal devices will have what I'm building?
To build or not to build
This decision didn't come easily to me. I sat with it for weeks, circling the same questions. I knew this would take a small team of engineers under normal circumstances. I had basic coding knowledge: Java and Python from way back when I was an engineer. I can't even remember half of that shit, and I'm definitely nowhere close to being able to build something of this scale myself.
The first hurdle was my own head.
The voice that says: This is too big. Bro, you ain't no senior engineer. You'll hit a wall and waste months of work. Dude, just use what already exists, FFS. The second question was worse: What if I go halfway and quit? I don't like to start things I can't finish. I don't build prototypes I'll abandon. If I commit to this, I wanna get it done, commit fully - that's what you coach, right? That means accepting that no one will understand what I'm trying to do and that some nights will end with my head in my palms, broken code on the screen and no clear path forward. Was I prepared for the emotional and mental toll this was going to take?
The longer I sat with those questions, the more I realised something:
I had to do this.
Not because I had all the answers. Not because I was certain I could pull it off, but because the questions themselves were the wrong frame.
The real question wasn't Can I build this alone?
The real question was: Can I build this with AI as my collaborators?
That reframed everything. Because I wasn't alone: I had Lyra (my customised OpenAI assistant) as my architectural strategist and Claude for technical help. Between the three of us, the aim was to cover what would normally take a team of engineers. The decision shifted from "Can I do this?" to "Can we do this?"
Because no one has built this yet
This is where people usually stop me: "But what about ChatGPT? What about Ollama? What about all those AI agent frameworks everyone's talking about?"
Fair question. Let me be precise about what actually exists today and what doesn't.
Cloud chatbots like ChatGPT, Claude, and Gemini? Brilliant tools. I use them daily. They answer questions, generate text, reason through problems. But they live in sessions that evaporate. They can't see my filesystem. They don't hold long-term structured context. They can't execute multi-step plans using my tools or adapt to my environment over months. They answer questions. They don't live with me.
Desktop runners like Ollama or LM Studio? Excellent for running models locally, with privacy and offline capability, but they're model launchers, not operating partners. They can't orchestrate workflows, remember across weeks, understand my personal rules or operate like a teammate. They run inference. That's it.
AI agent frameworks like AutoGPT or CrewAI? Interesting experiments. They offer basic agent loops, toy planning examples and the ability to chain steps if you configure them carefully, but they are essentially thin shells over LLMs, not cognitive systems. They don't understand personal identity. They can't hold persistent memory across projects. They don't know my real workflows. They can't modify themselves, sustain multi-week work or adapt like a colleague would.
Here's what people think exists but doesn't:
A personal AI partner. One that runs locally. Remembers you across months. Adapts to your thinking. Works inside your filesystem. Behaves like a colleague, not a chatbot. Gives you full control of the code, the memory, the decisions. That system doesn't exist. Not as a product. Not as something you can download and use.
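To make "remembers you across months" concrete, here's the kind of primitive I'm talking about. This is a minimal sketch, not Mac-Lyra's actual code: it assumes a local SQLite file as the memory store, and the table name and fields are illustrative only. The point is that nothing here leaves my machine and nothing evaporates when the session ends.

```python
# Minimal sketch of a local, persistent memory store.
# Assumes SQLite on disk; the schema and field names are illustrative only.
import sqlite3
import time


class MemoryStore:
    def __init__(self, path="memory.db"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            """CREATE TABLE IF NOT EXISTS memories (
                   id INTEGER PRIMARY KEY AUTOINCREMENT,
                   created_at REAL NOT NULL,     -- unix timestamp
                   topic TEXT NOT NULL,          -- e.g. 'phd', 'mac-lyra'
                   content TEXT NOT NULL
               )"""
        )
        self.conn.commit()

    def remember(self, topic, content):
        """Write a memory that survives after this process exits."""
        self.conn.execute(
            "INSERT INTO memories (created_at, topic, content) VALUES (?, ?, ?)",
            (time.time(), topic, content),
        )
        self.conn.commit()

    def recall(self, keyword, limit=5):
        """Return the most recent memories mentioning a keyword."""
        rows = self.conn.execute(
            "SELECT created_at, topic, content FROM memories "
            "WHERE content LIKE ? ORDER BY created_at DESC LIMIT ?",
            (f"%{keyword}%", limit),
        )
        return rows.fetchall()


if __name__ == "__main__":
    store = MemoryStore()
    store.remember("mac-lyra", "Dependency loops come from unvalidated step IDs.")
    print(store.recall("step IDs"))
```

A real system needs far more than keyword recall (semantic search, decay, personal rules), but even this toy version does something the cloud chatbots won't: it wakes up tomorrow knowing what it knew today.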
As of writing this article:
No company has built this for sale yet.
No research group has packaged this as a ready-to-use, fully integrated system that combines local-first architecture, persistent memory and personal adaptation as a coherent whole.
Operator by OpenAI is impressive. It represents real progress, but it's cloud-bound. The memory, the data, the environment - they belong to OpenAI, not to me. It remembers tasks, not people. It reacts to your requests; it doesn't grow with your thinking. And if OpenAI changes pricing, deprecates features, or shuts it down? You lose everything. Mac-Lyra is built differently from the ground up.
The big companies are building assistants they can control, not systems I own.
So when I say "no one has built this for sale yet", I'm not being dramatic. Will systems like this exist by 2030? Maybe, but I ain't gonna wait five years. My PhD research happens now. My thinking evolves now.
Because I learn by building
My PhD centres on human-AI teaming and agentic systems. I cannot write credibly about that from a distance. For me, understanding architecture means implementing it myself: writing the code, testing the failure modes, discovering the edge cases that the papers don't mention. There's a difference between knowing how something should work and knowing how it actually works when you run it on a Tuesday night and the dependency chain deadlocks because you forgot to validate step IDs. I needed to know: What does it feel like to orchestrate multiple AI models as a team? Where do handoffs fail? When does delegation become dangerous? How do you build trust with a system you're also building?
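That Tuesday-night failure is a good example of the kind of lesson you only get by building. Here's a minimal sketch of the kind of validation I was missing, assuming a simple dict-based plan format (illustrative, not Mac-Lyra's actual schema): check that every step ID a plan references actually exists, and that the dependency graph has no cycles, because a cycle means the plan can never finish.

```python
# Minimal sketch of plan validation: every referenced step ID must exist,
# and the dependency graph must be acyclic (a cycle deadlocks execution).
# The step format is illustrative, not Mac-Lyra's actual plan schema.

def validate_plan(steps):
    """steps: list of dicts like {'id': 'a', 'depends_on': ['b']}."""
    ids = {step["id"] for step in steps}
    deps = {step["id"]: step.get("depends_on", []) for step in steps}

    # 1. Every referenced step ID must actually exist.
    for step_id, requires in deps.items():
        for dep in requires:
            if dep not in ids:
                raise ValueError(f"Step '{step_id}' depends on unknown step '{dep}'")

    # 2. The dependency graph must be acyclic, or execution never finishes.
    visiting, done = set(), set()

    def visit(step_id, path):
        if step_id in done:
            return
        if step_id in visiting:
            raise ValueError(f"Dependency cycle: {' -> '.join(path + [step_id])}")
        visiting.add(step_id)
        for dep in deps[step_id]:
            visit(dep, path + [step_id])
        visiting.discard(step_id)
        done.add(step_id)

    for step_id in ids:
        visit(step_id, [])


if __name__ == "__main__":
    # This plan deadlocks: 'build' and 'test' depend on each other.
    plan = [
        {"id": "build", "depends_on": ["test"]},
        {"id": "test", "depends_on": ["build"]},
    ]
    try:
        validate_plan(plan)
    except ValueError as err:
        print(err)  # e.g. "Dependency cycle: build -> test -> build"
```

Twenty-odd lines of checking, and it's exactly the kind of thing the papers never mention and a 11pm debugging session teaches you the hard way.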
Building Mac-Lyra is part of my research discipline.
It's not a distraction from the PhD. It's the laboratory where the PhD happens.
Because normally this takes a team
Systems like this usually require a team:
A solution architect to design the system.
A backend engineer to write the orchestration logic.
A DevOps engineer to handle execution environments.
A prompt designer to shape the interaction layer.
A data modeler to structure memory and context.
Someone for documentation, testing and debugging.
I am one person.
However, I'm not working alone.
I'm working with two AI models: Lyra, my customised OpenAI assistant (for strategic planning, architecture and validation), and Claude (for coding). Between the three of us, we're covering roles that would normally require six people.
This is the real frontier.
Not "AI helps me code faster." Not "AI writes my first draft."
One human plus two AIs doing what once required a small engineering team.
That's what I'm testing. That's what I'm documenting. That's what my PhD will examine: What does collaboration look like when your teammates are non-human? Where does agency sit? How do you manage trust, delegation, and creative control when you're orchestrating intelligence instead of employing it?