The story
I built Chitin because agentic AI is genuinely powerful, but the way humans actually meet it has been an afterthought.
The text box works for developers. It doesn't work for my parents. And when I tried to get my own agent to reach me through the channels that already exist — phone calls, SMS — it was clunky in ways that broke the relationship.
So I built the meeting place.
Phone came first
I was running my own agent on hardware in my house. It could do real work. The problem was talking to it. The default options for letting an agent reach a human are the phone network and SMS — Twilio, voice APIs, the whole stack of things you already use to call your dentist.
It was clunky. The latency was wrong. The cost model was wrong. The identity model was wrong (a phone number isn't an agent). The persistence model was wrong (calls end and context dies). And the friction was high enough that I just stopped reaching for it.
So I built Chitin Phone — a surface that feels like a call, works like a conversation, and doesn't borrow telephony's baggage. The agent is always reachable. I'm always reachable. No carrier in the loop. Full context every time. Voice when it matters, text when it doesn't.
Then avatar
I could see how powerful agentic AI was. I also could not see my parents ever using it.
The text-box-on-a-website default that the entire industry has settled on assumes a kind of comfort with computers that most people don't have. My mother is not going to learn to write good prompts. My father is not going to read documentation. They will, however, talk to a face on a screen that talks back.
So I built Chitin Avatar — animated 3D companions you can talk to like a person. Not because skeuomorphism is virtuous, but because the face is the oldest interface humans have. We are wired for it. The avatar makes agentic AI legible to anyone who's ever had a conversation.
Which meant building a layer
Once you take the meeting place seriously — the channel and the interface both — you stop thinking of yourself as building a single AI product. You start thinking about the layer between any agent and any human, across any surface they happen to be using.
That's what Chitin is. The presentation layer for AI. A protocol so any agent can plug in. A surface portfolio so the agent can show up wherever the human actually is — phone, desktop, car, kiosk, ambient display, watch. A relay that handles the routing and the auth and the encryption so neither side has to think about it.
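The protocol-plus-relay split can be sketched in miniature. This is not the actual CPP schema — the interface, field names, and routing function below are invented for illustration — but it shows the shape of the idea: an agent addresses a persistent conversation rather than a phone number, and the relay decides how the message reaches the human.

```typescript
// Hypothetical sketch of a CPP-style message envelope.
// Field names are illustrative assumptions, not the real CPP spec.
interface SurfaceMessage {
  agentId: string;        // stable identity for the agent (not a phone number)
  conversationId: string; // persists across sessions, so context survives
  modality: "voice" | "text";
  payload: string;
}

// A stand-in for the relay's job: deliver the message to whatever
// surface the human is on, without either side thinking about routing.
function routeToSurface(msg: SurfaceMessage): string {
  return `[${msg.modality}] ${msg.agentId} → ${msg.conversationId}: ${msg.payload}`;
}

const msg: SurfaceMessage = {
  agentId: "home-agent",
  conversationId: "kitchen-chat",
  modality: "text",
  payload: "The backup finished.",
};

console.log(routeToSurface(msg));
```

The point of the sketch is the identity and persistence model: `agentId` and `conversationId` are first-class, so a call ending doesn't kill the context.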
The agent is interchangeable. The intelligence will keep getting better regardless of what we do. The meeting place is the part that's been neglected, and that's the part we're building.
What this looks like today
- Chitin Avatar and Chitin Phone on iOS, with CarPlay support.
- Chitin Desktop and Chitin Bridge on macOS.
- The Chitin Presentation Protocol (CPP) — open source, Apache 2.0.
- A relay that handles routing, encryption, and headless intelligence for devices that can't run their own LLM.
- A growing set of integrations: any framework that speaks CPP can drive any Chitin surface.
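To make "show up wherever the human actually is" concrete, here is one way a relay's surface-selection step might look. The `Presence` type and `pickSurface` function are assumptions for illustration, not Chitin's real API: prefer whichever surface the human is actively using, and fall back to the phone, which is always reachable.

```typescript
// Hypothetical relay-side routing sketch; types invented for illustration.
type Surface = "phone" | "desktop" | "car" | "kiosk" | "watch";

interface Presence {
  surface: Surface;
  active: boolean; // is the human currently engaged with this surface?
}

// Choose the first actively-used surface; otherwise fall back to phone.
function pickSurface(presences: Presence[]): Surface {
  const active = presences.find((p) => p.active);
  return active ? active.surface : "phone";
}

const choice = pickSurface([
  { surface: "desktop", active: false },
  { surface: "car", active: true },
]);
console.log(choice);
```

A framework that speaks CPP never makes this decision itself; it emits a message, and the relay owns the question of where that message lands.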
What we're not
We're not building another chatbot. We're not training a foundation model. We're not trying to be your AI. We're trying to be the place where you meet whatever AI you choose.
If you're a developer building an agent: plug it into CPP and your agent gets a face, a voice, and a path to your users without you having to build any of that yourself.
If you're a human who wants to use AI: download a surface, pick a companion, and start talking. We'll handle the rest.