AI at nerdgeschoss

When we do our work, we value humans. We value human ingenuity, creativity, and experience. AI is just a tool to support us, and all of this is built on trust: in each other, and in our customers.

What we believe

AI does not have taste. It writes code that compiles and sentences that parse. None of that is the same as knowing what your product should be or why a feature that looks right is wrong. Taste lives in the plan. The plan is human work.

AI augments, it does not replace. Human interaction is what we value. We use AI to help the people doing the work, not to stand in for them.

Be the driver, not the passenger. Using AI is not what erodes a developer's skill; accepting AI output without thinking is. Steer it. Push back when the plan is wrong. Prompt and hope, and you will ship mediocre code and get worse at your job. Not here.

How we write code

You own what you push. If your name is on the commit, the code is yours. "Claude wrote it" is not a defence.

You would have written it the same way. Read what you are about to submit. If the shape is wrong or the naming is off or the approach is not yours, fix it. Do not push code you disagree with just because Claude generated it. Making your colleagues review slop is the fastest possible way to make them hate you.

You could have produced it yourself. If the code is over your head, you cannot verify it. If you cannot verify it, you cannot ship it. Read it, explain it to yourself, and if the explanation runs out halfway through, stop. That is the part that is going to break in production in six months, and you are the one on call.

AI code review tools are fine as a self-check before pushing. They do not count as a real review. A human looks at the code, or the PR does not get merged.

Working on customer code

Most of our code is not ours. When we use AI in it, we are sending pieces of our customer's work to a provider, and that is not a decision to make silently. So we ask, and we write it down. The project page in the nerdgeschoss app tells you whether the customer opted in. If they opted out, you code the old-fashioned way.

We use Anthropic's Claude, which means some of that code goes to American servers. Sadly, that is currently the only option that is actually good enough to be useful.

What we do with customer data

Code can go to a cloud LLM, but only with opt-in. Code usually does not contain personal data.

Meeting transcripts require opt-in from everyone on the call. We ask clients during onboarding and employees when they join, and we write down the answer. If a single person says no, the call is not transcribed. When transcription is allowed, it runs on local software that does not touch the internet.

Production data does not touch an LLM. Ever. The most dangerous thing you can do at this company is log in to a production database with admin credentials and point Claude Code at it to "help you look into something." The model sees bank account numbers, birth dates, addresses, everything, and it is in Anthropic's logs within seconds.

Some examples of things that do not go near a cloud LLM without explicit consent from everyone involved:

  • NDAs and other confidentiality agreements
  • Signed contracts
  • Credentials, tokens, API keys
  • Customer financial or medical data
  • Employee personal data

How we work with each other

We read each other's daily nerds. We sit through each other's one-on-ones. When a colleague writes about their day, the whole point is that you read their words. The actual ones, not a cleaned-up summary. Replace the reading with AI-written digests and you have replaced the culture with something that looks a bit like the culture from a distance.

Meetings are a different story. We transcribe them (remember to check that everyone on the call is ok with that!) and we summarise them, our own and customer meetings alike. After shaping sessions, sprint reviews, or planning meetings, there is almost always context that does not end up in the ticket: constraints mentioned in passing, decisions that took ten minutes of back-and-forth to reach, a sentence that turns out to matter later. When a customer tells us what they want to achieve, the exact words they used are sometimes what we need to check whether we actually built the right thing.

Summaries get written right after the meeting and reviewed the same way we review code. Whoever commits a summary owns it. It does not matter whether the summary was drafted by AI or written by hand; the person putting it in the vault is responsible for it being correct.

Tooling that helps you find things is fine. Semantic search across meeting notes, full-text search across daily nerds. The AI gets you to the content faster. You still read the content.

Talking to customers

Sometimes we use AI to help draft customer communication (status updates, release notes, first drafts of proposals, whatever). Sometimes we don't. That is up to the person writing it.

Either way, the thing the customer receives is sent by a human who read it and is willing to put their name on it.