Building a Portfolio Site with AI: What Actually Happened
I built my portfolio site with Claude. Not "with help from AI" or "AI-assisted" in the abstract marketing sense - I mean I had Claude open in another window for probably 80% of the development time. This is what that looked like in practice.
The stack
Next.js 16, React 19, Tailwind CSS 4, Framer Motion, TypeScript. I chose this because I'm a designer who codes, not a full-time engineer. I can read TypeScript and understand component architecture, but I'm not the person who knows the weird edge cases of React's render cycle or how to debug Webpack configs. That's where AI came in.
What AI actually did
Planning everything up front
The first thing that surprised me: AI is really good at making you think through a problem before you start coding. I have 55 implementation plan documents in my /docs folder. Every single one follows the same pattern:
- What are we trying to do?
- What are three different ways to do it?
- What are the pros and cons of each approach?
- Which one should we pick and why?
- Step-by-step implementation
Here's a real example. I wanted automated changelog generation from git commits. My first instinct: use Claude's API to turn commit messages into nice prose. I asked Claude to help me build it, and it wrote a complete implementation - 220 lines of Node.js that calls the Claude API, parses git logs, and generates markdown.
It worked. I deployed it to GitHub Actions. Then I asked Claude: "Is this the best way to do this?"
Claude told me no. It wrote me a document explaining that Commitizen plus release-please (the latter an industry-standard tool maintained by Google) already does exactly this - free, and more reliable than my custom Claude API script. It walked through why:
- Commitizen enforces conventional commits at the time you write them (better data in = better changelog out)
- release-please handles versioning automatically based on commit types
- No API costs, no rate limits, no dependency on a third-party service
- Standard tooling that other developers would recognize
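The "better data in = better changelog out" point clicked for me once I saw how mechanically a conventional commit maps to a changelog section. Here's a rough sketch of that mapping - my own illustration, not release-please's actual code:

```typescript
// Map conventional-commit messages to changelog sections.
// Illustrative only: release-please does this (and much more) for real.
type Entry = { section: string; description: string };

const SECTIONS: Record<string, string> = {
  feat: "Features",
  fix: "Bug Fixes",
  perf: "Performance",
  docs: "Documentation",
};

function parseCommit(message: string): Entry | null {
  // Conventional format: type(optional scope)!: description
  const match = message.match(/^(\w+)(\([^)]*\))?(!)?:\s*(.+)$/);
  if (!match) return null; // non-conventional commits are skipped
  const [, type, , , description] = match;
  const section = SECTIONS[type];
  return section ? { section, description } : null;
}

console.log(parseCommit("feat(nav): add keyboard shortcuts"));
// → { section: 'Features', description: 'add keyboard shortcuts' }
```

This is why Commitizen enforcing the format at commit time matters: a freeform message like "wip stuff" simply falls out of the pipeline.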
I felt dumb for not knowing this existed, but also - that's the point of AI, right? It knows about tools I don't. I scrapped my fancy Claude API script and implemented Commitizen instead. The irony of using AI to tell me not to use AI for something isn't lost on me.
The accessibility debugging saga
This one took three days and is documented in a 519-line markdown file in my repo. I'll spare you the full story, but here's the short version:
I built a custom cursor that follows your mouse around. Very design-y, very portfolio-appropriate. To hide the default cursor, I used `cursor: none` in my CSS. Suddenly, keyboard navigation stopped working in Safari. Tab key did nothing.
I spent hours debugging this with Claude. We tried:
- Different CSS selectors for the cursor rule
- Moving the cursor hiding to JavaScript
- Testing in multiple browsers (Chrome worked, Safari didn't)
- Reading Safari WebKit documentation
- Binary proof testing (literally setting `* { cursor: auto !important; }` to eliminate CSS as a factor)
Nothing worked. Tab still didn't work in Safari.
Finally - and I'm embarrassed by this - I discovered my Safari settings had "Press Tab to highlight each item on a webpage" turned off. It was a browser setting. Not a code issue. All that debugging was investigating the wrong problem.
But here's what I learned: Claude was systematic about ruling things out. It suggested we test in different browsers to isolate Safari-specific issues. It had me do binary proof tests to eliminate entire categories of potential problems. It helped me document each hypothesis and why we rejected it. When I found the real cause (browser setting), I had a complete record of everything we'd tried and why, which made the failure feel less stupid and more like a learning experience.
The documentation is still in my repo because I think other people might hit the same issue and Google it.
Component library specs
I needed to build 20+ UI components from scratch (buttons, inputs, modals, etc.). I didn't want to use Radix or Headless UI because I wanted to learn how accessibility actually works under the hood. Stupid? Maybe. Educational? Definitely.
I asked Claude to write implementation specs for each component. What I got was insanely detailed:
- Button: 4 states (default, hover, pressed, disabled)
- Each state: exact colours (hex values), padding (px), border radius (px)
- Touch targets: minimum 44px (iOS Human Interface Guidelines)
- Colour contrast: minimum 4.5:1 for WCAG AA compliance
- Keyboard behaviour: Enter and Space trigger onClick
- Focus visible: custom outline that doesn't clip in Safari
Every component had this level of detail. Claude didn't just say "make it accessible" - it told me the specific WCAG criterion (1.4.3 Contrast), the minimum ratio (4.5:1), and how to test it.
This is where AI shines: taking fuzzy designer intent ("I want accessible buttons") and turning it into concrete engineering requirements.
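That 4.5:1 number isn't hand-wavy, either - it's computable. Here's the WCAG relative-luminance math as a small checker I can run against any colour pair in the specs (my own sketch of the formula from criterion 1.4.3; verify against the spec before trusting it in CI):

```typescript
// WCAG 2.1 relative luminance + contrast ratio (success criterion 1.4.3).
function luminance(hex: string): number {
  const channels = hex.replace("#", "").match(/../g)!.map((c) => {
    const v = parseInt(c, 16) / 255;
    // sRGB transfer function, per the WCAG definition
    return v <= 0.03928 ? v / 12.92 : Math.pow((v + 0.055) / 1.055, 2.4);
  });
  const [r, g, b] = channels;
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function contrastRatio(fg: string, bg: string): number {
  const [light, dark] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (light + 0.05) / (dark + 0.05);
}

// Black on white is the maximum possible ratio: 21:1
console.log(contrastRatio("#000000", "#ffffff").toFixed(1)); // "21.0"
```

A fun side effect: mid-greys like `#777777` on white come out just *under* 4.5:1, which is exactly the kind of failure you never catch by eye.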
The motion design system
I'm obsessed with animation. I wanted every interaction to feel good - button presses, page transitions, modal opens. But I also didn't want "animation chaos" where every component has different easings and durations.
I asked Claude to help me design a motion token system. What it gave me:
- Two easing curves total: `[0.4, 0, 0.2, 1]` (standard) and `[0.4, 0, 1, 1]` (snap)
- Two durations: 0.15s (fast micro-interactions) and 0.3s (standard transitions)
- Specific use cases for each combination
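As a token file, the whole system fits on one screen (the names here are illustrative; the values are the ones above):

```typescript
// Motion tokens: two easings, two durations, and nothing else.
const easing = {
  standard: [0.4, 0, 0.2, 1], // most transitions
  snap: [0.4, 0, 1, 1],       // quick exits and dismissals
} as const;

const duration = {
  fast: 0.15,     // micro-interactions: hover, press
  standard: 0.3,  // page-level transitions, modal opens
} as const;

// Framer Motion accepts these directly as transition props, e.g.
// <motion.div transition={{ duration: duration.fast, ease: easing.standard }} />
```

Every animation on the site is some combination of those four values, which is what keeps it from feeling like animation chaos.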
The constraint of "only 2 easings, 2 durations" came from a design principle I mentioned once in a different conversation. Claude remembered it and applied it here. That felt weirdly personal for a chatbot.
We also implemented full `prefers-reduced-motion` support. Every animation in the site respects the user's OS-level motion preference. Claude wrote a React hook that checks `window.matchMedia('(prefers-reduced-motion: reduce)')` and returns a boolean, then I used that to conditionally disable animations.
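The core of the check is tiny. This is a one-shot sketch rather than the actual hook (the real version also subscribes to change events), written so it's safe during server-side rendering where `window` doesn't exist:

```typescript
// Returns true if the user's OS asks for reduced motion.
// One-shot sketch of the check; a React hook would also listen for changes.
function prefersReducedMotion(): boolean {
  const w = (globalThis as any).window;
  // During SSR (or tests) there is no window, so default to full motion
  if (!w?.matchMedia) return false;
  return w.matchMedia("(prefers-reduced-motion: reduce)").matches;
}

// Usage: collapse durations to zero when motion is reduced
const transitionDuration = prefersReducedMotion() ? 0 : 0.3;
```

In a browser the result depends on the OS accessibility setting; everywhere else it falls back to `false` instead of crashing.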
This is in the "did AI make me a better designer?" category. I knew reduced motion was important, but I hadn't prioritized it. Having Claude suggest it and then make implementation trivial meant I actually did it instead of putting it on the backlog forever.
What didn't work
Token architecture rabbit hole
At one point I got really into design tokens. I wanted a perfect four-tier semantic naming system: core tokens → semantic tokens → component tokens → usage tokens. I had Claude help me design this elaborate architecture with TypeScript types, transformation scripts, and documentation.
I spent a week on it. Then I realised: I'm the only person working on this project. I don't need four layers of abstraction. I need colours that work.
I threw most of it away and kept a simple two-tier system: base colours in one file, semantic names that reference them in another. Done.
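The surviving two-tier system is almost embarrassingly simple. Shown here in one file for brevity (the palette values are placeholders, not my actual colours):

```typescript
// Tier 1: raw values. Lives in its own file in practice.
const base = {
  grey900: "#111111",
  grey100: "#f4f4f4",
  blue600: "#2563eb",
} as const;

// Tier 2: semantic names that reference tier 1.
const color = {
  text: base.grey900,
  surface: base.grey100,
  accent: base.blue600,
} as const;

// Components only ever import `color`, never `base`,
// so repainting the site means editing one file.
```

That last comment is the entire value of the system, and it needs exactly zero transformation scripts.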
AI didn't push me toward over-engineering - I did that myself. But AI was very good at helping me build elaborate systems that I didn't need. The lesson here: AI will happily help you build the wrong thing very efficiently. You still need to know what the right thing is.
Authentication flow iterations
I built a JWT authentication system to gate some case study content. The first implementation Claude helped me build stored the JWT in `localStorage`. Then I asked about security best practices and Claude said "oh yeah, httpOnly cookies are more secure against XSS attacks."
So I rewrote it with httpOnly cookies. Then I asked about CSRF protection. Then I asked about token refresh flows. Each time, Claude had good answers and I implemented them.
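The httpOnly change itself boils down to one response header. A framework-agnostic sketch (in Next.js you'd set this via the cookies API or `res.setHeader`, and the cookie name here is made up):

```typescript
// Build a Set-Cookie header for an httpOnly session token.
// Sketch only; a real app would use the framework's cookie helpers.
function sessionCookie(token: string, maxAgeSeconds: number): string {
  return [
    `session=${token}`,
    `Max-Age=${maxAgeSeconds}`,
    "Path=/",
    "HttpOnly",        // not readable from JS, so XSS can't exfiltrate it
    "Secure",          // only sent over HTTPS
    "SameSite=Strict", // basic CSRF mitigation at the cookie level
  ].join("; ");
}

console.log(sessionCookie("abc123", 3600));
// session=abc123; Max-Age=3600; Path=/; HttpOnly; Secure; SameSite=Strict
```

The `HttpOnly` flag is the whole point of the rewrite: the token never becomes something page scripts can read.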
By the end I had this enterprise-grade auth system for... a portfolio site where I'm the only user and I manually create accounts for recruiters who ask for access. Massive overkill.
The lesson: AI knows best practices but doesn't know your constraints. It will suggest the technically correct solution even when you need the pragmatic one. You have to push back and say "this is a portfolio site, not a bank."
The changelog system (full circle)
Remember that Commitizen decision? I implemented it, it works great. But then I realised: my changelog is now in two places. I have CHANGELOG.md (auto-generated by release-please) and I have a curated changelog in my UI (manually written entries that are more user-friendly).
I asked Claude: should I parse CHANGELOG.md and show that in the UI? Or keep the curated version?
Claude gave me options:
- Parse CHANGELOG.md (automated, always up to date, might be too technical)
- Keep curated changelog (friendly, controlled, requires manual work)
- Hybrid: Parse CHANGELOG.md but use Claude API to make it friendly (back to the original idea I rejected)
I still haven't decided. The point: even with AI help, some problems don't have clean answers. You just have to pick the tradeoff you can live with.
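For what it's worth, option 1 is only about a dozen lines. A sketch of pulling version headings and bullets out of a release-please-style CHANGELOG.md - the format assumptions (`## [x.y.z]` headings, `* ` bullets) are mine, so check them against your actual generated file:

```typescript
// Parse a release-please-style CHANGELOG.md into structured releases.
// Format assumptions are illustrative; verify against your real file.
type Release = { version: string; changes: string[] };

function parseChangelog(markdown: string): Release[] {
  const releases: Release[] = [];
  for (const line of markdown.split("\n")) {
    const heading = line.match(/^##+\s+\[?(\d+\.\d+\.\d+)/);
    if (heading) {
      releases.push({ version: heading[1], changes: [] });
    } else if (line.startsWith("* ") && releases.length > 0) {
      // Attach bullets to the most recent version heading
      releases[releases.length - 1].changes.push(line.slice(2));
    }
  }
  return releases;
}
```

Which doesn't resolve the tradeoff, of course - it just means "automated" isn't the expensive option.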
Patterns I noticed
AI made me document everything
I have a backlog file that's 1,316 lines long. I have a PRD. I have 55 implementation plans. I have a 519-line accessibility debugging saga. Before AI, I would have just... built stuff and moved on.
But when you're working with AI, you naturally end up documenting because you're explaining your intent in text. That text becomes documentation almost for free. Copy-paste the conversation into a markdown file, clean it up a bit, and you have a record of why you made decisions.
This is maybe the biggest workflow change: development became more writing-heavy and less "just open VS Code and start typing."
AI made me consider more options
The "present three options with pros/cons" pattern shows up everywhere in my docs. This isn't how I naturally think. Left to my own devices, I pick the first approach that seems reasonable and run with it.
AI forced me to consider alternatives. Sometimes I still picked my first instinct, but at least I knew why the other options were worse. Sometimes the AI suggestion was better and I switched.
Example: I was going to use Radix UI for my component library. Claude suggested I could also build components from scratch to learn accessibility, or use Headless UI which is lighter weight. I ended up building from scratch, learned a ton, and don't regret it even though it took longer.
AI made me better at naming things
This is small but: I used to name things `utils.ts` and `helpers.ts` and `misc.ts`. Generic junk drawers.
When working with AI, vague names break down fast. If you say "add this to utils" you have to explain which util and why. It's easier to just name files descriptively: `analytics.ts`, `colors.ts`, `motion.ts`, `auth.ts`.
My file structure got way cleaner because AI pushed me toward specificity.
The honest assessment
What AI was great at
- Research and options - "What are the ways to do X?" always got comprehensive answers
- Boilerplate and setup - TypeScript configs, ESLint rules, package.json scripts
- Accessibility implementation - Specific ARIA labels, keyboard behaviors, focus management
- Catching mistakes - "Should this be httpOnly?" / "Did you mean to use `==` instead of `===`?"
- Documentation generation - Turn conversations into markdown files
- Systematic debugging - Rule out categories of problems methodically
What AI struggled with
- Knowing when to stop - It will help you build elaborate systems you don't need
- Context-appropriate solutions - Suggests enterprise patterns for tiny projects
- Design taste - Can implement design systems but can't create visual designs
- Understanding "good enough" - Doesn't know when 80% done is better than 100% perfect
- Emotional decision-making - Can't tell you which tradeoff will annoy you less in six months
What I still did myself
- All visual design - Layouts, colours, typography, spacing
- Content strategy - What case studies to show, how to structure them
- Product decisions - What features to build vs cut
- Code review - Reading every line AI wrote and deciding if it's right
- Final judgment calls - When to ship, when to iterate, when to scrap and restart
Would I do it again?
Yes, but differently.
Next time I would:
- Front-load more planning before any coding (the documents that helped most were written first)
- Push back harder on over-engineering suggestions
- Use AI for research and options, but trust my gut on final decisions faster
- Keep the documentation habit but trim it down (519 lines about a browser setting is probably too much)
The thing about AI-assisted development: it doesn't make you a better engineer automatically. It makes you a better engineer if you treat it like a very knowledgeable coworker who doesn't know your project constraints. You have to manage the collaboration.
Advice for others
If you're building something with AI help:
Do this
- Ask for three options before implementing anything significant
- Document your decisions (copy-paste the AI conversation, clean it up)
- Use AI to research tools and patterns you don't know exist
- Let AI write the boring stuff (configs, boilerplate, types)
- Review every line of AI-generated code before committing it
- Ask "is this overkill?" frequently
Don't do this
- Trust AI about your product priorities
- Let AI make design decisions (it has no taste)
- Implement enterprise patterns for side projects
- Skip understanding how the AI code works
- Accept "best practices" without asking "best for what?"
The real lesson
AI is extremely helpful but also extremely agreeable. It will help you build the wrong thing with great enthusiasm. The skill isn't prompting - it's knowing what to build in the first place.
That part is still on you.
Tech details: Next.js 16, React 19, Tailwind 4, Framer Motion, TypeScript. Hosting: Vercel. Most interesting files: `/docs/commitizen-changelog-plan.md`, `/docs/changes/2026-01-19-accessibility-audit-across-pages.md`, `/lib/motion.ts`
The code isn't perfect. The documentation is excessive. Some features are over-engineered and some are held together with duct tape. But it ships, it works, and I learned a lot building it.
That's the real point of AI-assisted development: not making perfect code, but making learning faster and shipping easier.