# Niranta, full text > 42, Our Life's Answer. A personal blog by Nitin. This file contains the full body of every published post, in plain text, for AI tools that want the entire corpus in one fetch instead of crawling. Niranta is a personal blog exploring AI, philosophy, productivity, health, and life. The name comes from "nirvana" and "anta" (Sanskrit for "end"), a Hitchhiker's Guide reference. Everything here is written by Nitin: co-founder of Nvision Technologies (1998) and Cask Data (acquired by Google in 2018), engineer in AI and distributed systems. The writing style is analytical and personal, somewhere between Stratechery (Ben Thompson) and Marcus Aurelius. Not a media company. One voice. No ads. Site: https://niranta.blog Index (titles only): https://niranta.blog/llms.txt Sitemap: https://niranta.blog/sitemap.xml Author: Nitin --- # The Personal Engineering Org: gstack, Superpowers, and Visual-Explainer URL: https://niranta.blog/ai/the-personal-engineering-org/ Category: AI Published: May 10, 2026 Read time: 9 min Words: 2079 Three open-source tools turn a single Claude Code terminal into a coordinated engineering team: strategy, process, and visual communication, all from one session. The reason a single developer can now operate at the speed of a small product team is not that any one model got dramatically better. It is that the tooling around the model has become specialized enough to simulate the roles a real engineering org actually has (strategist, architect, designer, tester, release manager, technical writer) and to coordinate them through a single terminal. Three open-source projects stand out for me right now. gstack, by Garry Tan, gives you the team. Superpowers, by Jesse Vincent, gives you the process. Visual-Explainer, by Nico Bailon, gives you the visual communication layer.
They were built independently, but they slot together cleanly because each addresses a different layer of the same problem: how does a single human ship work that survives contact with reality? This post walks through what each layer does, how they combine, and the six concrete situations where I have found the combined stack worth running. The setup itself takes a few minutes. The discipline of using all three takes longer. 01 the three layers gstack ships 23 opinionated slash commands that map to engineering org roles. /office-hours challenges every product decision before code is written. /plan-eng-review locks architecture. /design-shotgun produces UI variants. /qa runs real-browser tests. /cso performs OWASP and STRIDE security passes. /ship and /land-and-deploy handle release with safety gates. /retro writes the changelog while context is fresh. The full list reads like a virtual headcount plan. Superpowers sits a layer below. It enforces structured phases (brainstorming, planning, red/green TDD, code review, verification) and isolates work in git worktrees so an agent that goes off the rails cannot touch main. The point is not novelty; the point is that without it, even a strong agent will skip the unglamorous steps. With it, the steps stop being skippable. Visual-Explainer turns plans, diffs, architectures, and data into HTML pages, interactive slide decks, Mermaid diagrams, annotated mockups, and before/after visuals. The cost of producing a good diagram has dropped to roughly the cost of writing a sentence describing what should be in it. That changes which conversations are worth having visually instead of in prose. Each layer is useful on its own. The interesting behavior shows up when they are layered: gstack proposes the work, Superpowers executes it under discipline, Visual-Explainer makes the resulting decisions and diffs legible to people who are not staring at the terminal with you.
02 solo founder to mvp in a weekend You start with a one-line idea. The first move is /office-hours from gstack, the CEO role challenges the assumption that the idea is worth building before anything is planned. If it survives that, /autoplan moves it into structured planning, with Superpowers enforcing brainstorming and TDD pipeline discipline. The design phase is where Visual-Explainer changes the loop. /design-shotgun in gstack generates four to six annotated HTML mockup variants and an interactive user-flow diagram. You are no longer choosing between abstract directions; you are choosing between visual artifacts you can sit with for ten minutes. /plan-eng-review locks the architecture before the first commit. /qa runs real-browser checks. /ship and /land-and-deploy push to production with canary rollout. The result is not just a working MVP; it is a working MVP with pitch-ready visuals already produced as a byproduct of the build. 03 legacy code refactor without fear Refactoring a five-year-old monolith is the inverse problem. The risk is not "did we build the right thing"; the risk is "did we break something nobody documented." This is where the three layers earn their cost the most. Discovery starts with /qa-only and /cso on the existing system before any change is proposed, you want a security and quality baseline so you can tell what your work introduced versus what was already there. /plan-eng-review then maps the current architecture, and Visual-Explainer renders before/after architecture diagrams and dependency graphs from those plans. /design-review surfaces accumulated UX debt. Execution stays in worktrees thanks to Superpowers, with TDD on every change. /retro and /document-release produce visual changelogs that stakeholders can actually read. /freeze and /guard add safety gates around the deploy. The combined effect is that a refactor stops being a slow-motion gamble and becomes a sequence of small, verified, communicable changes. 
04 data dashboards and analytics features Analytics features have a specific failure mode: the implementation is correct, but the chart shows the wrong number, or the right number framed in a misleading way. The fix is to make the visual representation a first-class part of the spec, not an afterthought. /office-hours validates the business question first, what decision will this dashboard inform. /design-shotgun with Visual-Explainer produces production HTML mockups and live data-table visuals with annotations, so the visual choices are reviewable before any pipeline code exists. /plan-eng-review defines the API and database architecture with visual flow diagrams. Superpowers enforces TDD on the data pipeline. /qa validates in a real browser against real data. The visuals you generate during this loop end up serving two roles: they are the implementation reference, and they are the documentation. There is no separate "we should write a doc for this" task at the end. 05 an open-source library or tool release A community release lives or dies on the quality of its README, changelog, and architecture overview. These are exactly the artifacts that get rushed. The full gstack workflow handles development. The release-specific value is in /document-release combined with Visual-Explainer producing HTML changelogs, architecture overviews, and "how it works" slide decks automatically from the actual diffs and plans. /benchmark generates performance numbers before /ship creates the PR. Superpowers keeps the codebase clean and testable through the cycle. The README and release notes end up looking like the output of a much larger team because the visual layer was already producing artifacts as a side effect of building. 06 production incident response and post-mortem Incidents at 2 a.m. are where the workflow's value compounds. /investigate drives the deep dive into the anomaly. /retro captures context while it is fresh, not the next afternoon when half of it has dissolved. 
/plan-eng-review in combination with Visual-Explainer produces the timeline diagram and root-cause visual, which become the spine of the post-mortem document. Superpowers' root-cause tracing skills and /learn update the knowledge base so the same incident does not need to be re-debugged six months later. The post-mortem itself ends up as a shareable HTML report with a clear timeline and root-cause visual, instead of a wall of text that no one outside the on-call rotation reads. 07 enterprise feature with security and compliance Audit-ready work has a different pace. /plan-ceo-review pressure-tests the business value before resources are committed. /plan-eng-review with Visual-Explainer produces compliance-grade architecture diagrams. /qa, /cso, and /careful form the pre-merge gate sequence. Visual-Explainer produces audit-ready security flowcharts that the compliance team can review without translation. /ship deploys with rollback triggers wired in. The work moves more slowly than a greenfield MVP, that is the point, but it also produces the artifacts an auditor expects to see, in the format they expect to see them. 08 getting started Installation is short. gstack clones into ~/.claude/skills/gstack and runs a setup script:

```bash
# install gstack
git clone --single-branch --depth 1 https://github.com/garrytan/gstack.git \
  ~/.claude/skills/gstack
cd ~/.claude/skills/gstack && ./setup
```

Superpowers and Visual-Explainer install through the Claude Code plugin marketplace. Optional layers, team mode, persistent memory via GBrain, can be added once the base is comfortable. The first slash command worth running on any new idea is /office-hours. The friction it introduces before any code exists is exactly the friction the rest of the stack is designed to amplify. 09 why three layers and not one It is fair to ask why this needs three tools rather than one bigger tool.
The answer is that the three layers address problems that look similar from a distance and turn out to be different up close. gstack handles the question of what role is asking. A CEO challenge is not a code review. A security audit is not a design critique. Specialist roles produce different output because they are pointed at different concerns; collapsing them into a single generalist agent is exactly the failure mode that produces plausible-looking work with quiet gaps. Superpowers handles the question of what discipline is being enforced. TDD, worktree isolation, phased delivery, these are constraints on how work happens, not on who does it. They are orthogonal to specialist roles, and they are precisely the constraints a fast model will skip if you let it. Visual-Explainer handles the question of how the work becomes legible. Most engineering decisions are still communicated as prose because producing a good diagram used to be expensive. That cost has collapsed; the prose default has not yet caught up. The tool exists to make the visual artifact the natural output rather than the special one. Each layer is useful in isolation. Run gstack alone and you get specialist perspectives without process discipline, the audits are good, the execution is uneven. Run Superpowers alone and you get rigorous execution without challenge, the code is clean, the direction is sometimes wrong. Run Visual-Explainer alone and you get beautiful artifacts of work that did not need to be done. The combination is what makes the system feel like an organization rather than a set of clever commands. The thing that has changed is not the speed of any single keystroke. It is the ratio of decisions to artifacts. With three layers running, every meaningful decision produces a diagram, a plan, a test, a review, and a release note as a near-zero-cost byproduct. That is what an engineering team has always done. It is now possible to do it alone. 
--- # Peptides 101: What My Athlete Sons Taught Me URL: https://niranta.blog/experiments/peptides-101-athlete-sons/ Category: Experiments Published: May 9, 2026 Read time: 6 min Words: 1354 Tags: health, peptides, recovery, personal My two sons play football and basketball, always dealing with injuries. They got curious about peptides for recovery and healing, so I started reading up too. Here is what I learned from them and my own exploration. Disclaimer I'm not a doctor or anything like that. This is just me sharing what I've learned from my sons and some stuff I read online. I don't recommend anyone try peptides. A lot of this isn't fully approved yet and there can be risks. Talk to a real doctor first if you're thinking about it. So a little while back my two boys, 18 and 20, started telling me about peptides. One plays football, the other plays basketball. Both of them are always dealing with injuries, knees, shoulders, the usual stuff from sports. They also just want to look good and feel better. They've been reading up on this for a while and trying some things. At first I didn't really get it, but they kept explaining and I got curious. So I started looking into it myself. I even saw this big post on X the other day from some guy who watched a podcast with Dr. Alex Tatum. It went viral and had thousands of views. He broke down like 15 different peptides and what they're supposed to do. People were bookmarking it left and right and commenting on it. One person said something like "399 bookmarks, that's a lot of people about to go research where to buy these." Another joked "why so many? Can't they just make one that does everything?" It was interesting to see regular folks talking about it. 01 what even is a peptide? From what I understand (and I'm no expert), peptides are just short chains of amino acids. Amino acids are the little building blocks your body uses to make proteins. 
So your body takes those building blocks, links a few together into a peptide, and if it links a lot more it becomes a protein. Peptides kind of act like little messengers, they tell your body to do things like fix tissue, make collagen for your skin, or calm down swelling. When people take them, they're usually getting a lab-made version that gives your body a nudge in the direction it already wants to go. Nothing too fancy, just helpful signals. 02 why are athletes and regular people getting into this? My sons are always banged up from football and basketball, so they're interested in anything that might help them heal faster and get back on the field. From what I've seen and read, a lot of players are looking at stuff for quicker recovery and less downtime. That X post I mentioned talked about a few that keep coming up. Like BPC-157 for helping with injuries and tissue repair, it's supposed to help grow new blood vessels right where you need them. TB-500 for nagging injuries that won't go away, especially when paired with the first one. And GHK-Cu for skin, helping with fine lines or just getting that healthier look. People in the comments were asking real questions too, like whether to inject near the injury or if oral versions work the same. For regular folks it seems like collagen peptides (the kind you mix in a drink) are popular for skin and joints. Now there's more talk about blends for feeling better overall and looking good. The research side is moving too, scientists are studying these for healing and the FDA has been looking at loosening some rules lately so more might become available through regular pharmacies. 03 a few that keep coming up I'm not gonna list a ton because that gets overwhelming. Here are the main ones I've heard about from my boys and that podcast thread: Collagen peptides, pretty easy, usually a powder. Helps with skin feeling firmer and joints not aching as much. Good if you just want to feel and look a little better day to day. 
BPC-157, the one a lot of athletes mention for helping tendons, muscles, and even gut stuff heal. The post said it's injected near the problem area for best results. TB-500, pairs well with the one above for reducing swelling and helping repair tissue. Useful when you're always active like my sons. GHK-Cu, this one's for skin repair and collagen. Helps with that glow people talk about and general tissue health. It's actually one of the ones in the blend I ended up trying. My boys have been looking at combinations, so I decided to try KLOW myself. It's a mix that includes BPC-157, TB-500, GHK-Cu, and one more called KPV. It's meant to support healing and feeling better all around. 04 the not-so-fun part, side effects and real talk This stuff isn't risk-free. Since a lot of these aren't fully studied long-term in people, we don't know everything yet. Some folks mention sore spots where they inject, feeling tired, or changes in how hungry they are. Quality matters a ton too, that X thread had people warning to always check lab tests because what's on the label doesn't always match what's in the bottle. One commenter asked about using them for arthritic joints. Another wondered about oral versus injectable versions. It was good to see the mix of curiosity and caution in the replies. My sons are careful about where they get things and how they use them, and I'm doing the same. 05 how it's going for me After hearing all this from the boys and seeing that post get so much attention, I started taking KLOW a little while ago. Nothing crazy, I'm just a regular guy who was curious. So far I've been feeling pretty good. My usual aches and pains have eased up some, and my skin looks healthier with more of a glow to it. It's not like night and day, but it's noticeable in a good way. I'm taking it slow and keeping an eye on how I feel. I'm not saying this is for everyone or that it's some miracle. 
I'm just sharing what I've learned from my kids and what I've tried myself as a curious dad. 06 that's about it Peptides are getting a lot of buzz right now, especially with athletes wanting faster recovery and regular people wanting to feel and look better. That podcast breakdown and the thread about it made it easier for me to understand. The science is moving, more research is happening, but you still have to be smart about it. If this sounds interesting to you, do your own reading, listen to your body, and definitely talk to a doctor who knows what they're doing. My boys keep teaching me new things, and I'm glad they opened my eyes to it. It's been a fun little learning curve. Thanks for reading. Stay safe out there. --- # 10 strategies I use to slash token usage without compromising quality and reliability URL: https://niranta.blog/ai/claude-code-lessons/ Category: AI Published: May 7, 2026 Read time: 14 min Words: 2079 After months of building complex, production-grade systems with Claude Code, I've cut token consumption by 60–80% while improving output quality, reliability, and speed. After months of building complex, production-grade systems with Claude Code, I've accumulated a set of hard-won practices that have dramatically reduced my token consumption, often by 60–80%, while improving output quality, reliability, and development speed. These aren't theoretical tips; they're daily drivers that let me tackle ambitious projects without burning through credits or hitting frustrating context limits. Here are the key lessons, complete with explanations, exact configurations, and real-world examples. Ditch the 1M context window, stick to 200K The biggest mistake I see (and made myself) is defaulting to the full 1M context. Don't do it. Set these environment variables immediately:

```bash
CLAUDE_CODE_DISABLE_1M_CONTEXT=1
CLAUDE_AUTOCOMPACT_PCT_OVERRIDE=80
```

Why this matters: 200K is plenty. 
For almost every coding task, debugging, feature implementation, refactoring, 200K provides more than enough relevant history without the noise and dilution that comes with 1M. The larger window invites problems: it encourages the model to "remember" irrelevant details from hours ago, leading to context drift, repetitive suggestions, and higher hallucination rates. It also costs significantly more per request. With the autocompact override set to 80%, Claude automatically compacts around the 75–80% mark of your 200K window, keeping things running smoothly without sudden "context full" errors. The power of manual /compact Don't wait for autocompact. Proactively run /compact when your context hits 50–60%. It forces a clean summary of what actually matters right now, removes conversational dead weight (old failed attempts, tangential discussions), keeps the model's attention laser-focused on the current sub-problem, and prevents the slow degradation in reasoning quality that happens as context fills with cruft. After a successful /compact, immediately follow up with a one-sentence recap of your goal. This re-anchors the session perfectly:

```
/compact
Current goal: Implement user authentication with JWT refresh tokens in the Express backend.
```

Use /clear religiously when starting a new problem This is non-negotiable. The moment you shift to a meaningfully different problem, new feature, different bug, architecture decision, type /clear. Previous context from an unrelated task pollutes the new one. The model starts thinking in the wrong frame of reference, and you waste tokens making it unlearn old assumptions. Breaking work into small, focused problems with /clear between them is the foundation of efficient Claude Code usage. It turns one giant, expensive session into many crisp, high-signal ones. 
Break work into parallel phases and spawn sub-agents liberally Once you've decomposed a feature into independent pieces, don't do everything sequentially in the parent session. Spawn sub-agents for each parallel phase, one for the API layer, one for the frontend components, one for tests and validation, one for documentation. Sub-agents run with their own context windows, which massively reduces load on your main (parent) context. The parent only needs high-level coordination and final integration review. In practice, running 4–6 in parallel yields enormous token savings and speed gains. Example decomposition:

```
Main task: Build real-time collaborative whiteboard
- Sub-agent 1: WebSocket server + Redis pub/sub
- Sub-agent 2: Canvas drawing logic + CRDT conflict resolution
- Sub-agent 3: User presence and cursor synchronization
- Sub-agent 4: Permission system and access control
```

Each sub-agent gets a tight, focused prompt and a /clear at the start. Delegate aggressively with /ask [model] Don't be a hero, distribute the load across different LLMs. With custom skills you can do things like:

```
/ask cursor Implement the React hook for optimistic updates
/ask gemini Write the SQL migration for the new audit log table
/ask codex Generate the OpenAPI spec from the Express routes
```

This spreads token usage across providers (avoiding rate limits on any one model) and plays to each model's strengths. Keeps your main Claude session clean and focused on orchestration. The more precise the delegated request, the better the result. Never change models mid-session This seems obvious but is easy to forget. Once you start a session with Sonnet (or whatever model), stay with it until the task is complete or you /clear. Model switches can reset subtle behavioral patterns, cause the new model to misinterpret accumulated context, and waste tokens on re-explaining things. Consistency wins. 
Vague requests are token vampires, be ruthlessly specific This is the difference between mediocre and exceptional results. Compare these two requests: Vague: "Fix the login bug" Specific: "In auth/login.ts, the handleLogin function (lines 42–67) fails silently when the user enters an email with uppercase letters. The backend expects lowercase. Add client-side normalization using .toLowerCase().trim(), show a clear error toast 'Email must be lowercase', and ensure the request still succeeds after normalization. Also add a unit test for mixed-case emails." Specificity wins because the model doesn't waste tokens guessing what you want, you get exactly the behavior you need on the first try, and fewer follow-up correction rounds means massive token savings. If your request could be interpreted multiple ways, rewrite it until there's only one possible interpretation. Install Caveman to make Claude talk like a caveman (65%+ token savings) This one changed everything. The Caveman plugin forces Claude (and many other agents) to respond in ultra-terse language, removing articles, filler words, pleasantries, and hedging while keeping 100% technical accuracy. Benchmarks show roughly 65% average token reduction, up to 87% on React debugging tasks, and approximately 3x faster responses with dramatically cleaner, more actionable output.

```bash
claude plugin marketplace add JuliusBrussee/caveman
claude plugin install caveman@caveman
# Then activate
/caveman
# or
/caveman ultra
```

You also get specialized commands: /caveman-commit for perfect conventional commit messages, /caveman-review for one-line PR feedback, and /caveman:compress CLAUDE.md which shrinks your rules file by roughly 46%. The transformation in practice Here's what the difference looks like on a real response about unnecessary re-renders: Normal mode: "I notice that you're creating a new object reference on every render in your component. This is causing unnecessary re-renders because React sees it as a new prop. 
You should wrap it in useMemo to stabilize the reference..." Caveman mode: "New object ref each render. Inline object prop = new ref = re-render. Wrap in useMemo." Same information. 80% fewer tokens. Faster. Better. Running Caveman in full mode for almost everything is worth it, the output is addictive once you get used to it. Keep Sonnet effort at medium, rarely use xhigh or max For Sonnet specifically, medium effort is the sweet spot for most coding work. Higher effort modes often produce more verbose output, over-engineering, diminishing returns on quality, and higher token burn. Medium gives excellent reasoning with crisp, focused responses. Save the higher settings for truly novel architectural problems, and even then, staying on medium often produces better results. Stop pasting screenshots, describe the problem or use agent-browser This was a huge shift in the workflow. Instead of pasting a screenshot with "What's wrong here?", either describe precisely in text or use a purpose-built tool like vercel-labs/agent-browser. The text-first approach forces you to actually diagnose the problem before asking, which often means you find it yourself. When you do need AI help, a precise description beats an image every time, no visual ambiguity, no token cost for image processing, and Claude can reason about the exact state described. The agent-browser approach takes this further. It uses accessibility tree snapshots with stable element references (@e1, @e2) instead of screenshots, deterministic interaction for reliable click/type/inspect without visual guessing, and a full debugging toolkit, console logs, network requests, React component tree, styles, all queryable via text commands. This makes frontend debugging with Claude dramatically more reliable and token-efficient. 
Example:

```bash
npx skills add vercel-labs/agent-browser
# Then in Claude:
/agent-browser chat "Navigate to the dashboard and tell me what the console errors are when I click 'Export CSV'"
```

You get structured, machine-readable output instead of "it looks like there's a red error somewhere." Use code-review-graph for AST-powered, minimal-context reviews The code-review-graph tool builds a persistent, local knowledge graph of your entire codebase using Tree-sitter AST parsing. Instead of Claude reading 5–10 full files (thousands of tokens) for every review, it parses your code into a graph of functions, classes, imports, calls, and inheritance; tracks blast radius (exactly which functions and tests are affected by a change); and delivers minimal targeted context, roughly 100 tokens instead of 5,000+, updating incrementally in under two seconds on every file change. Real results from using this: 6.8x fewer tokens on code reviews, up to 49x fewer tokens on daily coding tasks, and 100% recall on impact analysis, it never misses affected code. Setup:

```bash
pip install code-review-graph
code-review-graph install --platform claude-code
code-review-graph build
```

Then use commands like /code-review-graph:review-pr, code-review-graph detect-changes, and semantic search across your architecture. When you ask Claude to review a PR, it no longer needs to re-read the entire monorepo. It queries the graph and gets precisely the relevant slices. Context window stays clean, costs plummet, and reviews become faster and more accurate. This is now mandatory infrastructure for any serious Claude Code project. Final thoughts: measure, iterate, and build systems that last These ten practices have transformed how I work with Claude Code. Token burn is no longer the limiting factor, it's now creativity and problem decomposition. That said, despite all this care, I still run out of tokens. 
How fast I get back to burning on demand has been improving, but I am not yet happy with where I am, and I will keep figuring out how to work more efficiently. The current daily ritual: start with /clear and Caveman mode; decompose and spawn sub-agents where possible; delegate aggressively with /ask ; compact proactively at 50–60%; use code-review-graph for any code inspection; describe problems or use agent-browser instead of images; stay on Sonnet at medium effort; and be obsessively specific in every prompt. The result: features that used to take 3–4 expensive sessions now complete in a single focused, low-cost session, and the code is more reliable because the context was always clean and targeted. If you're serious about scaling your Claude Code usage, start with the context settings and Caveman installation today. The ROI is immediate. What are your favorite Claude Code optimizations? Drop them in the comments, I'm always stealing good ideas. --- # My Nightly Deep Dive: From Black Holes and Hawking Radiation to Quantum Computers URL: https://niranta.blog/curiosities/from-black-holes-to-quantum-computers/ Category: Curiosities Published: May 6, 2026 Read time: 8 min Words: 1352 A months-long bedtime physics habit turned into a real attempt to understand what a quantum computer is, why Shor's algorithm matters, and whether the hype is earned. Every single night for months, I've been drifting off to the voices of Brian Cox, Michio Kaku, Leonard Susskind, and a rotating cast of brilliant explainers on YouTube, Astrum, Arvin Ash, Astro Kobi, Cleo Abram. The topics? Black holes swallowing stars, Hawking radiation leaking information back out, bosons zipping through the LHC, Einstein's relativity bending spacetime, the arrow of causality, Planck time (that mind-bending 10⁻⁴³ seconds where quantum gravity might rule), and the wild idea that the universe might be a hologram. It started as pure bedtime entertainment. 
Then a friend mentioned their startup is working on something quantum-adjacent, tech that could matter once these machines scale. Suddenly the late-night physics wasn't just fascinating; it felt urgent. I needed to understand what a quantum computer actually is, why it's different, and whether it's worth the hype or just another shiny distraction. This is the story of that journey, from confused listener to genuinely excited explorer. 01 The Spark: When Black Holes Meet Quantum Weirdness Susskind, one of the fathers of string theory, kept coming up in discussions about the black hole information paradox. Hawking said black holes destroy information (breaking quantum rules). Susskind and others fought back, arguing information is preserved on the event horizon, maybe encoded like a hologram. That debate pulled in entanglement, qubits, and the idea that spacetime itself might emerge from quantum information. Planck time, LHC collisions creating mini black holes in theory, bosons as force carriers, everything kept circling back to the quantum realm. Relativity and causality felt solid until you zoomed to the tiniest scales. That's when I realized: to really grok these cosmic mysteries, I probably needed to understand the machines built to harness quantum rules. Enter quantum computers. 02 What Even Is a Quantum Computer? Forget the sci-fi. A normal computer, your phone, laptop, the supercomputers running the LHC simulations, uses bits. Each bit is a tiny switch: strictly 0 or 1, like a light bulb that's either off or on. It solves problems by flipping those switches in sequences or big parallel batches. Fast for most things. Hopelessly slow for others. A quantum computer uses qubits. Here's the magic: thanks to superposition, a qubit can be 0 and 1 at the same time while it's working, think of a coin spinning in the air, heads and tails until it lands. 
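The spinning-coin picture can be made concrete in a few lines of plain Python. This is a hedged toy sketch, not a real quantum runtime: a qubit is just a pair of amplitudes, a Hadamard gate puts a definite 0 into an equal superposition, and squaring the amplitudes (the Born rule) gives the measurement probabilities.

```python
import math

# Toy state-vector model of one qubit: (alpha, beta) are the
# amplitudes for |0> and |1>. Illustrative only.

def hadamard(state):
    """Apply the Hadamard gate to a single-qubit state (alpha, beta)."""
    alpha, beta = state
    s = 1 / math.sqrt(2)
    return (s * (alpha + beta), s * (alpha - beta))

def probabilities(state):
    """Born rule: measurement probabilities are squared amplitude magnitudes."""
    alpha, beta = state
    return (abs(alpha) ** 2, abs(beta) ** 2)

q = (1.0, 0.0)             # start as a definite 0
q = hadamard(q)            # the spinning coin: 0 and 1 at once
p0, p1 = probabilities(q)
print(round(p0, 3), round(p1, 3))   # 0.5 0.5

# Apply Hadamard again: the amplitudes cancel (interference) and the
# qubit lands back on a definite 0, not a 50/50 coin flip.
q = hadamard(q)
p0, p1 = probabilities(q)
print(round(p0, 3), round(p1, 3))   # 1.0 0.0
```

The second print is the part a classical coin cannot do: flipping "randomness" twice returns a certainty, which is the interference effect quantum algorithms exploit.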
Link multiple qubits with entanglement and they become spooky twins: measure one and the other's outcome is fixed instantly, even across the room or (in theory) across the universe, though no usable signal passes between them. Suddenly you're not checking possibilities one by one. You're working with exponentially many possibilities at once. The amplitudes then interfere so that, when you measure at the end, the right answer pops out louder. It's not faster at everything, just at problems that explode with complexity. 03 Classical vs. Quantum, Side by Side Let's make it concrete with basic math. A classical computer adding or subtracting big numbers, say 987,654,321 + 123,456,789, or finding optimal routes with thousands of distance calculations, does it digit by digit, carrying the 1 or borrowing like you learned in school. One path at a time. For a single sum? Blazing fast. For "find the absolute best combination across millions of possible sums and subtractions", logistics, finance, molecular energies, it grinds through options or clever approximations. Reliable, predictable, but time and energy balloon when the search space gets huge. A quantum computer doing the same loads the numbers (or route options) into superposition so countless additions and subtractions happen in parallel. Entanglement correlates them. Then interference cancels wrong answers and amplifies the best one. For the right problem structure, one run can surface the optimal result where a classical machine would need years. For plain 2 + 2? Classical still wins on simplicity and stability. For real-world monsters like "simulate every possible molecular interaction for a new drug" or "optimize a global supply chain with 10,000 variables"? Quantum doesn't just win, it changes the game. 04 Shor's Algorithm: The One That Keeps Security Experts Up at Night This is the algorithm that proved quantum computers aren't just theoretical toys. In 1994, Peter Shor showed how a quantum machine can factor huge numbers, break them into primes, exponentially faster than the best known classical methods. Why care?
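Hold that question for a moment; the shape of the recipe itself is worth seeing. It splits into a quantum step, finding the period r of a^x mod N, and ordinary classical arithmetic. A toy Python sketch, with the quantum step replaced by brute-force search, so it has none of the speedup and only shows the skeleton (names and structure are mine, for illustration):

```python
import math
import random

def find_period(a, n):
    # The quantum heart of Shor's algorithm, done here by classical brute
    # force: the smallest r > 0 with a^r = 1 (mod n). The quantum Fourier
    # transform finds r exponentially faster; everything else below is
    # ordinary arithmetic.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n):
    # Classical skeleton of Shor's recipe. Assumes n is odd, composite,
    # and not a prime power (true for RSA moduli).
    while True:
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g                 # lucky guess already shares a factor
        r = find_period(a, n)
        if r % 2 == 1:
            continue                 # need an even period; try another a
        h = pow(a, r // 2, n)
        if h == n - 1:
            continue                 # trivial square root; try another a
        return math.gcd(h - 1, n)    # a nontrivial factor of n

# Example: 15 = 3 * 5. With a = 7 the period is 4, and gcd(7^2 - 1, 15) = 3.
```

On a real quantum machine only find_period changes; the gcd bookkeeping around it stays exactly this simple.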
Modern encryption (RSA protecting your bank logins, emails, government secrets) relies on the fact that factoring a 300-digit number is practically impossible classically. It would take longer than the age of the universe. Shor's algorithm turns that into a feasible task on a large enough quantum computer by finding a hidden repeating pattern (the period) using quantum Fourier transforms, then finishing with easy classical math. Unique because: it's one of the first clear demonstrations of quantum advantage on a problem with real-world stakes. It doesn't just speed things up, it breaks assumptions we built the digital world on. That's why "Q-Day" (when big quantum machines arrive) is driving a global race for post-quantum cryptography. Shor's isn't just clever math; it's a warning shot and a roadmap. 05 Is This Actually Valuable for Humanity? Yes, when aimed at the right problems. Quantum computers won't replace your laptop. They'll partner with classical ones for the impossible bits:

- Designing drugs and materials by simulating quantum chemistry exactly, faster cures, better batteries, carbon capture
- Optimizing everything from traffic to energy grids to financial portfolios
- Probing fundamental physics, maybe even simulating the quantum gravity near Planck time, or black hole interiors

Risks are real: security transitions, massive infrastructure needs, and access inequality. But the potential to accelerate solutions to climate, health, and energy challenges feels worth the careful steering. We're in the noisy NISQ era now (2026), with machines of dozens to hundreds of qubits improving monthly. Full fault-tolerant systems that can run Shor's on real encryption keys? Probably mid-2030s, but the curve is steepening fast. 06 Where I Am Now Those nightly sessions on black holes, Hawking radiation leaking information, bosons at the LHC, relativity warping causality, and Planck-scale weirdness didn't just entertain me, they lit the path.
Quantum computers are the tool built to speak the universe's native language at its smallest, weirdest scales. Susskind's string theory work and black hole insights suddenly feel connected to the qubits humming in labs today. I started this because a friend's startup made it personal. I'm continuing because it makes the cosmos feel a little less mysterious and a lot more solvable. The universe runs on quantum rules. We're finally building machines that do too. If you've ever nodded off to physics podcasts wondering "what does any of this mean for me?", you're not alone. The journey from confused listener to someone who gets (at least basically) why qubits and Shor's matter has been one of the most rewarding rabbit holes I've fallen into. --- # Why 19pine.ai Actually Changed How I Handle Real-Life Drudgery URL: https://niranta.blog/experiments/why-19pine-ai-changed-how-i-handle-drudgery/ Category: Experiments Published: May 5, 2026 Read time: 8 min Words: 1687 I test most new AI tools. Most disappoint. 19pine.ai was different, it's the first one that genuinely shifted how I spend my days, starting with a two-hour Comcast negotiation I never had to sit through. A friend mentioned 19pine.ai a few months back. Because I'm in that phase where I'll try pretty much any new AI tool that crosses my path, I signed up immediately. This is both a strength and a liability, the graveyard of apps I've signed up for and abandoned after two days is long and undistinguished. I've been using a small evaluation framework for these things. Nothing fancy, but it keeps me honest and stops me from being wowed by a demo that doesn't survive contact with real life. Three questions: Does it actually deliver personal value, real savings in time, money, mental energy, or the constant need to remember things? How much setup friction is there before I see results? Once the task is done, does it go above and beyond, or just disappear? I wasn't expecting much. 
Most AI I've tried is genuinely impressive at thinking, writing, and reasoning, but not at doing the annoying real-world tasks I actually dread. The gap between "AI that can analyze anything" and "AI that will stay on hold with Comcast for two hours" is enormous, and almost nothing bridges it. 19pine.ai turned out to be different. It's not perfect. But it's the first tool I've used that has genuinely shifted how I spend my days, not by making me a faster thinker, but by taking the tasks I hate off my plate entirely. 01 the comcast call that made me a believer My internet bill had crept back up to $152 a month. Comcast has this habit of quietly raising prices every few months, which forces you into the same exhausting negotiation cycle every time. I've done it for years: hold music, scripted reps, haggling like I'm at a flea market. It always takes an hour I don't have, and it always leaves me slightly drained even when it works. The small win never quite feels worth the friction. This time I tried something new. I opened the 19pine app and typed: "Negotiate my Comcast internet bill down to around $70/month." That was it. No uploading bills. No filling out forms. No selecting account type from a dropdown. A few minutes later the app asked a couple of clarifying questions. Then it said it was going to call Comcast. I half-laughed, half-doubted it. My phone rang. Three-way call. An AI voice, surprisingly natural, calm, no uncanny valley weirdness, asked me to confirm authorization. I said yes, hung up, and went back to work. For the next two hours it stayed on the line with the rep. Pushing. Checking promotions. Negotiating. The app sent me short updates so I wasn't left wondering if it had gone silent. When it finally came back to me, the new price was $74 a month. $78 saved every month. Roughly $900 a year. I didn't talk to a single human. I didn't sit on hold. 
I didn't have to remember to do it again in six months, because after the win, the AI asked if I wanted it to check for better deals automatically on a recurring basis. I said yes. It set up permanent authorization with another quick three-way call, and now I just get periodic updates in the app telling me what it handled. No more mental note. No more dread. No more negotiation dance. That single experience sold me on the concept more than any product demo ever could. There's a category of task, tedious, recurring, emotionally draining, but not actually difficult, where I've always known I should handle it but kept putting it off because the cost of doing it felt higher than the cost of ignoring it. 19pine rebalances that equation completely. 02 turning other annoying tasks over to it Once I saw it could actually act in the real world, I started giving it everything I usually procrastinate on:

- Changing flight dates and pushing for better seats
- Navigating a stubborn return with a retailer that didn't want to cooperate
- Upgrading my parents' flights as a surprise gift
- Calling around for quotes to repair the wicker on my outdoor sofa

Each time the pattern was the same. I describe what I want, answer a quick question or two if needed, and it goes to work, calling, waiting, negotiating, updating me. Tasks that used to sit on my to-do list for weeks, or that I'd quietly pay full price to avoid, now get handled in the background while I do something that actually requires my attention. The wicker repair one is worth mentioning specifically. That's the kind of task where the friction isn't even the call itself, it's the five minutes of mental overhead every time it surfaces on the list before you push it down again. Having it just disappear from my mental stack was quietly significant. I didn't realize how much that low-grade procrastination costs until it stopped. 03 this isn't about taking jobs We keep hearing about AI taking jobs.
That framing misses what's actually happening for people like me. I'm not replacing a personal assistant I never had. I'm not outsourcing work I used to give to someone else. These are tasks I would have done myself, badly, late, while grumbling, or simply avoided until they cost me money anyway. What 19pine does, and what the wave of AI agents starting to appear are beginning to do, is make me dramatically more efficient at the parts of life that aren't work and aren't rest, the administrative middle layer that eats hours without producing anything you'd point to at the end of the day. It frees up mental bandwidth for the things that actually matter: work I care about, time with family, actual rest. The difference it creates is not between "I have help" and "I don't." It's between "I'll deal with this later" and "it's already handled." Those two states are much further apart than they sound. We're still early. The market is full of AI tools that chat, summarize, and generate ideas. The ones that act, that pick up the phone, stay on hold, and negotiate on your behalf, are still rare. They feel like the real shift. They turn AI from a clever toy into something that has a measurable effect on how you spend your finite hours. 04 my honest score

- Personal value: Huge wins on time, money, and mental load. The recurring auto-renegotiation alone makes it worth it. 9 / 10
- Setup friction: Almost zero. You tell it what you want. Occasionally needs a bit more context, but nothing painful. 9 / 10
- Above and beyond: Keeps helping after the task is done. Still some moments where it hits an edge case that needs a nudge. 8 / 10
- Overall: 8.7 / 10

It's not flawless. Sometimes it needs more guidance than I'd like, and not every task goes perfectly on the first try. The edge cases are real. But for what it already delivers in daily life, it's easily the most useful AI tool I've adopted, and I've adopted a lot of them. I've told friends and family about it.
Some think I'm exaggerating or that it sounds impossible. Some say they'll try it and probably won't. A few have signed up and started sharing their own small wins back with me. I get why it feels unbelievable at first, we've all been burned by overhyped tech that demos beautifully and disappoints in practice. For me, though, it's become a simple habit: before I spend two hours on hold, or chasing down a quote, or negotiating anything I'd rather not negotiate, I now ask whether 19pine can handle it. More often than not, the answer is yes. And that small shift has already paid for itself many times over. If you've got a recurring annoyance that eats your time or money, try throwing it at 19pine.ai . Worst case, you learn something. Best case, you never have to deal with it again. What's one task you wish you could hand off right now? I'm curious, maybe I'll test it myself and report back. --- # My Personal AI Evaluation Framework: How I Size Up Every New Tool I Come Across URL: https://niranta.blog/ai/my-personal-ai-evaluation-framework/ Category: AI Published: May 3, 2026 Read time: 7 min Words: 1993 I've tested more AI products than I care to admit. Most disappoint. Here is the brutally practical eight-dimension framework I use to decide whether any tool is actually worth my time and money. I've tested more AI products than I care to admit. Most of them are flashy demos that feel magical for five minutes and then quietly disappear from my life. They don't save me real time. They don't cut my actual expenses. And they sure as hell don't reduce the mental load of remembering yet another damn thing. The AI hype cycle has a specific shape: someone posts a demo, the internet erupts, you sign up, it impresses you once, and then it just... sits there. You keep paying the subscription. You keep meaning to use it. You never do. This has happened to me more than I am comfortable admitting. So a while back I stopped letting first impressions drive the decision. 
I built a framework. It started with three questions I still ask before anything else: Does it actually deliver personal value, real savings in time, money, mental energy, or the constant need to remember things? How much setup friction is there before I see results? Once the task is done, does it go above and beyond, or just disappear? Those three keep me honest. But after running dozens of tools through them, I found they were necessary but not sufficient. There were tools that passed all three and still washed out after a month. And there were tools that I initially dismissed that ended up becoming genuinely important to how I work. The three questions captured the immediate hit. They didn't capture durability, integration, or economics over time. So the framework grew. It now has eight dimensions. The three original questions are still in there, they're just embedded inside a larger structure that captures the full picture of whether something is worth my attention and my wallet. 01 the eight dimensions Here is each dimension, what it measures, and why I care about it.

Dimension 1: Personal Value Delivery
I only care if it creates measurable wins in time, money, energy, or cognitive load. If I can't point to a concrete "this saved me X minutes, dollars, or headaches this week," it's a no. Vague feelings of productivity don't count. I want numbers or I want the specific anxiety that no longer shows up.

Dimension 2: Adoption Friction & Time-to-First-Value
Life is short. If I'm watching tutorials, migrating data, setting up integrations, or waiting days for meaningful results, I'm out. The standard I use: copy-paste simple, or "tell it what to do" simple. If I have to become an expert in the tool before the tool helps me, that's a product design failure, not a user failure.
Dimension 3: Post-Task Experience & Delight
Does it dump output and ghost me, or does it follow up, offer next steps, remember context, and make me feel like I have a competent assistant who's still thinking about the problem? The tools that stick are the ones that feel like a relationship, not a transaction. One and done is for vending machines, not assistants.

Dimension 4: Reliability & Trust
Hallucinations, random quality swings, or "you should double-check this" kill it for me. I need consistency I can bet real tasks and real money on. The moment I have to babysit the output, I've lost the leverage I was looking for. Trust is earned task by task, and it takes a lot of consistent wins before I fully delegate something.

Dimension 5: Speed, Responsiveness & Flow
If it slows me down or forces me into its rhythm, forget it. Great tools feel like extensions of my brain. Bad tools feel like a separate app I have to open, context-switch into, and manage. The best AI experiences I've had are the ones where I forgot I was using a tool at all. That's the bar.

Dimension 6: Integration & Ecosystem Fit
The less context-switching the better. If it plugs into the tools I already live in, email, calendar, notes, browser, it wins. If it creates new tabs, new logins, new mental overhead, it loses. This dimension is about whether the tool respects the life I already have or demands I rebuild around it.

Dimension 7: Cost Transparency & Real ROI
I hate surprise usage caps and hidden fees. Every tool has a honeymoon period where the novelty justifies the cost. The real question is what happens after month three: does the value still justify the price, or the opportunity cost of my attention? Success-based pricing is the model I trust most. Pay when it wins. Don't pay when it doesn't.

Dimension 8: Long-Term Viability & Habit Formation
Does it get smarter the more I use it? Does it create a habit loop I actually want, or just another thing I forget to open?
The acid test: if the company vanished tomorrow, would I actually miss it? If the answer is no, it was a toy. If the answer is yes, and especially if the answer is "I'd have to rebuild a workflow", that's a tool that's genuinely embedded in how I function. These eight aren't arbitrary. They all come back to the same core question: does this tool make my personal life better without trading one form of friction for another? I've walked away from plenty of "impressive" products because they failed two or three of these. Benchmark scores and demo videos are irrelevant. What matters is whether it holds up on Tuesday afternoon when I have a real problem and limited patience. 02 the framework in action: running Pine AI through it A friend told me about 19pine.ai a couple of weeks ago, an autonomous agent that actually makes phone calls, negotiates bills, cancels subscriptions, chases refunds, and handles the kind of customer-service hell I've always hated. I signed up and immediately tested it on my Comcast bill, which had crept up to $152 a month. I gave it the details and stepped away. It jumped on a call, negotiated the rate down to $74 a month, and set up automatic future checks so the rate wouldn't creep back up. Same story on flight changes, return negotiations, and getting quotes on repairs I kept putting off. Every time it made the calls, sent me updates, and delivered the result. I never touched a phone. Here is how it scored across all eight dimensions, on a 1–10 scale:

- Personal Value Delivery: Saved real money ($78/month on internet alone) + hours of phone dread. Quantifiable every time. 9 / 10
- Adoption Friction & Time-to-First-Value: Signed up, described the task, done. No videos, no data migration, no waiting. 9 / 10
- Post-Task Experience & Delight: Gave me summaries, next-step options, and set up ongoing monitoring. Felt like a real assistant. 8 / 10
- Reliability & Trust: Consistent across multiple real-world calls. No hallucinations on the tasks it handled. Need one more month before I'd give it a full 10. 8 / 10
- Speed, Responsiveness & Flow: Works asynchronously, calls when it can, which is actually better than sitting on hold. Follow-ups were quick and natural. 8 / 10
- Integration & Ecosystem Fit: Works via their app, updates via text and email. No deep calendar or email integration yet, but it didn't add friction either. 7 / 10
- Cost Transparency & Real ROI: Success-based pricing. I only pay when it wins. After the first bill negotiation it more than paid for itself. 9 / 10
- Long-Term Viability & Habit Formation: Already learning my style across tasks. I'd miss it if it disappeared, that's the real test. 8 / 10
- Overall: 8.3 / 10

Pine isn't perfect, the integration story is still early, and I want more reps before I trust it with anything higher-stakes. But 8.3 across all eight dimensions is genuinely rare. Most tools I run through this framework land in the 5–6 range. A few make it to 7. Getting to 8.3 means almost every bar I actually care about got cleared. I wrote more about how Pine specifically changed my habits in the companion post. 03 steal this framework The whole point of building a framework like this is that you stop getting fooled by demos. The AI tool market is moving so fast that hype is the default signal, everyone is announcing, very few things are genuinely useful at the level of your actual life. Eight dimensions sounds like a lot. In practice it takes maybe ten minutes of honest reflection after using a new tool for a week. If you can't score it, you haven't used it long enough to know. If you can score it and it's landing below 6 on most dimensions, the demo fooled you. What I keep coming back to is this: the tools that actually stick aren't the ones with the best marketing or the most impressive capability demos. They're the ones that remove a specific pain so completely that you stop thinking about it. Not "this is impressive." Not "I should use this more." Just, the problem is gone.
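For transparency, the headline number in a scorecard like this is nothing fancier than the unweighted mean of the dimension scores, rounded to one decimal. A throwaway sketch (the key names are mine):

```python
# The eight dimension scores from the Pine scorecard; the overall number
# is their unweighted mean, rounded to one decimal (8.25 -> 8.3).
scores = {
    "personal_value": 9, "adoption_friction": 9, "post_task_delight": 8,
    "reliability_trust": 8, "speed_flow": 8, "integration_fit": 7,
    "cost_roi": 9, "long_term_habit": 8,
}
overall = sum(scores.values()) / len(scores)   # 8.25
```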
That's the standard. Eight dimensions to get there. --- # Why Some Friendships Leave Me Tired URL: https://niranta.blog/personal/why-some-friendships-leave-me-tired/ Category: Personal Published: May 2, 2026 Read time: 5 min Words: 843 Tags: personal, friendships, energy Some friendships leave you tired. The imbalance is quiet but constant, you give, they take, and over time you start to dread the messages. I have met many people in my life. It is very obvious in certain interactions why folks want to be in touch with you. Some reach out knowing they need your help. It could be money or anything material. This feels like the majority. Others want to build a real personal relation. They get to know you and they are also there when help is needed. The first type is common. There is nothing wrong with it. People have needs and sometimes you can help. But it sucks up too much energy. You know what they are doing. Still you smile and go along. There is no point in hurting them. 01 the drain Energy draining friendships are the ones that leave you tired after every talk or meeting. You feel empty inside. It is not always one big thing. It is the constant one sided flow. They share their problems and needs. You listen and try to support. When you have something going on they change the subject or go quiet. You end up giving advice or help but get very little back. Over time it adds up and you feel used even if they do not mean it that way. What makes them draining is the imbalance. You invest time and care. They take what they need and move on until the next time. You battle with yourself because you want to be a good person. You do not want to judge or stop helping. Yet you start to dread their messages. You wonder why you keep going along with it. Life already has enough stress from work and other duties. These friendships take more than they give and leave you with less energy for the people who actually matter. 
02 the psychology of it I have wondered about the psychology of these one sided friendships. It feels like some people get used to taking without giving much back. They might not even notice because their focus stays on what they need right now. On my side I keep showing up because I want to be kind and helpful. It is hard to break that pattern even when I know it is not equal. Inside my own head I start to feel the weight of it. Part of me questions why I let it go on. Another part hopes it might change someday. That inner back and forth is what wears me out the most. I have seen this pattern again and again. You stay nice because that is how you are. You do not want to be rude or create bad feelings. Yet inside it leaves you drained. The connection does not feel equal. One side keeps taking. The other side keeps giving without getting much in return. There are other types too. Some people come around only for fun times or shared activities. It is light and okay for a while but it fades when the interest changes. Then there are the rare ones where both sides care for real. Those feel different. They give you energy instead of taking it away. 03 how I push back In the end I still overdo it at times. I help more than I should and later I wonder why I wasted my time and energy. I battle this constantly. It is not that I do not want to help or that I expect something back every time. It just feels like too much work. I am working on figuring out how to push back gently without hurting them. Here are some ways I try. When someone asks for a big favor I say I am tied up with my own things right now and cannot take it on. For smaller requests I reply with something short like I can only do this much or I suggest another way they can get help. I do not always reply right away. Sometimes I wait a day or two before responding. I also stopped reaching out first as much. If they only message when they need something I keep my answers polite but brief. 
I do not offer extra help or time unless I really want to. It is not easy but these small changes help me protect my energy. I still stay kind. Most people do not even notice the shift. Things often sort out on their own. This is just how I see it from my experiences. Maybe some of you feel the same. Being aware of these patterns helps you move through life with less frustration. --- # The 5-Gate Rule: Never Ship AI Code Without Adversarial Review URL: https://niranta.blog/ai/the-5-gate-rule/ Category: AI Published: May 1, 2026 Read time: 10 min Words: 2155 AI coding assistants are fast. That's the problem. Speed without verification is how you ship shell injection, database corruption, and backwards alerts, all in the same session. AI coding assistants are fast. That's the problem. Speed without verification is how you ship a Terraform plan with shell injection, database corruption on every deploy, and a monitoring alert that fires when nothing is broken. All in the same document. All from the same session. The AI didn't warn you. It felt productive right up until it wasn't. That happened to me. What caught it wasn't a single code review. It was five of them, each looking at the code from a completely different angle. 01 The Infra Plan That Almost Destroyed Everything I was building a blue/green deployment system on GCP. FastAPI backend in Docker, SQLite state machine, nginx upstream swap between blue and green. I wrote a detailed plan. Thorough, well-reasoned, the kind of document you feel good about. Then I sent it to an adversarial reviewer. The verdict: DO NOT SHIP. Eight critical issues. Not "some concerns." Not "minor nits." Eight things that would have caused root compromise, data corruption, or a deployment that never starts. Here is a sample of what was found.

C2, SQLite corruption on every deploy
The state Docker volume was mounted by both blue and green containers simultaneously during the 30-second health check window.
Two concurrent SQLite writers, no locking coordination. Database corruption on every single deploy cycle. The plan didn't mention it. A standard code review wouldn't have caught it, it's a runtime interaction, not a code smell. You only see it when two processes race for the same file at the wrong moment.

C3, Shell injection via Secret Manager
The deploy script loaded the environment with:

set -a; source service.env; set +a

source executes arbitrary bash. The env file content came from Secret Manager. If an attacker gets write access to Secret Manager, or if someone accidentally puts a bash expression in a value, that's container root. The plan had explicitly argued against storing credentials in Terraform state for security reasons, then introduced a worse vulnerability two sections later. The model that wrote it didn't notice the contradiction.

C6, Containers couldn't read their own secrets
The env file was mounted chmod 600. The containers ran as a non-deploy UID. First deploy, health check fails, nothing starts, no useful error message. The kind of bug you spend an hour staring at before you realize the problem is a permission bit, not the code.

H18, Alert fires when nothing is broken
COMPARISON_LT threshold=1 on REDUCE_COUNT_FALSE. This fires when zero regions are failing. The alert was configured backwards. An on-call engineer would have been paged whenever the system was healthy and left in silence when it wasn't.

A standard code review found none of this. The adversarial review found all of it, because it assumed everything was broken and worked to prove it, rather than assuming correctness and looking for exceptions. That posture change is everything. 02 Why One Review Isn't Enough The deeper issue is that a single reviewer, human or AI, brings a single posture. They read with a particular set of assumptions, a particular mental model, a particular definition of "what could go wrong." That mental model has blind spots.
One reviewer's blind spots are not the same as another's, which is precisely why you need more than one. AI makes this worse in a specific way: the model that wrote your code built the same mental model you did while writing it. When you ask it to review its own output, it reviews it from inside the same frame. It tends to approve what it generated because it generated it with an intent that still makes sense to it. The blind spots are shared. You need reviewers that weren't in the room when the code was written. My workflow runs five review gates before anything significant ships. They are not redundant. Each one catches a different failure mode, asks a different question, and approaches the code with a different posture. 03 The Five Gates

Gate 1: /plan-eng-review. Is this the right design?
Engineering correctness before implementation. Schema design, missing edge cases, architecture decisions, API contracts. This gate catches "you've designed yourself into a corner" before you've written a line of code. A recent design doc for a multi-product pricing feature scored 6/10 here, with a blocking issue: the entire calculation engine was speculative because the actual pricing numbers hadn't been obtained yet. Three days of implementation work, avoided. The cost to fix a design flaw after the code exists is an order of magnitude higher than catching it in a doc review.

Gate 2: /codex. Does this do what it claims?
An independent AI perspective on the diff, not the same model that wrote the code reviewing it. Codex tends to catch logical gaps, overcomplexity, and "this doesn't do what you think it does" bugs. Running it on freshly written code surfaces disagreements between what the implementation does and what the spec said it should do. The disagreements are usually small. Small disagreements left unresolved become large incidents.

Gate 3: Greptile. How does this change ripple through the codebase?
Greptile reads the entire codebase, not just the diff.
It catches cross-file regressions, surfaces callers of changed functions that weren't updated, finds invariants that hold in one file but break in another, and flags when a change quietly invalidates something documented in a completely unrelated part of the repo. The model that wrote the change doesn't know the codebase exists beyond the context window. Greptile does. On a large PR, this gate regularly finds two or three things that no amount of diff-level review would surface, because the problem isn't in the diff, it's in the gap between the diff and everything else.

Gate 4: /review. Is the code good?
The standard code quality pass. SQL safety, security at the obvious level, test coverage, style consistency. This is the review that most teams run and call sufficient. It catches what standard reviews catch. It is necessary but not sufficient, which is the entire point. The other four gates exist because this one, however good, cannot catch runtime interactions, cross-file regressions, design contradictions, or adversarial failure paths.

Gate 5: Adversarial review. How does this fail, and what does failure cost?
Maximum skepticism. Assumes everything is wrong until proven otherwise. Hunts for race conditions, data corruption paths, security exploits, silent failures, and logic errors that look correct in isolation but break under real conditions. This is where the shell injection gets found. This is where the concurrent writer problem surfaces. This is where the backwards alert comparator gets caught. It runs last on purpose, it is expensive, it requires reading the full implementation with hostile intent, and you don't run it on code that hasn't already survived the first four gates. It is the final filter, not the first one.

04 The Pattern In Practice On a typical session, this is what shipping looks like in my prompts:

"commit and create PR, but first /plan-eng-review, /codex, Greptile, and /review"

Not optional. Not "if we have time." Before the PR exists.
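The ordering is mechanical enough to state as code. A toy sketch (hypothetical names and checks, not how gstack actually wires these commands together): each gate is a cheap pass/fail check, the pipeline stops at the first failure, and the expensive adversarial gate only ever sees a change that survived the other four.

```python
# Toy model of the gate sequence: each gate is a pass/fail check over a
# change; the pipeline short-circuits at the first failure, so the costly
# adversarial gate only runs on changes the earlier gates approved.
# The checks here are stand-ins, not the real tools.

def run_gates(change, gates):
    for name, check in gates:
        ok, reason = check(change)
        if not ok:
            return f"BLOCKED at {name}: {reason}"
    return "SHIP"

gates = [
    ("plan-eng-review", lambda c: (c["design_reviewed"], "design not reviewed")),
    ("codex",           lambda c: (c["matches_spec"],    "diff drifts from spec")),
    ("greptile",        lambda c: (c["callers_updated"], "stale callers elsewhere")),
    ("review",          lambda c: (c["tests_pass"],      "quality gate failed")),
    ("adversarial",     lambda c: (not c["race_found"],  "race condition under load")),
]

change = {"design_reviewed": True, "matches_spec": True,
          "callers_updated": True, "tests_pass": True, "race_found": True}

verdict = run_gates(change, gates)
# Everything passed except the final, hostile look: the change is blocked.
```

The sketch makes the same point the post does: adversarial review goes last because it is the most expensive check, so it should only spend its budget on changes the cheaper gates have already cleared.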
On larger infrastructure changes, the adversarial review joins: "/plan-eng-review, /codex, Greptile, /review, and adversarial review of the full phase" All five, on the same diff, before it touches a branch. The sequence matters. Design review first, there is no point finding code-level bugs in a design that's wrong. Codex and Greptile next, independent perspectives, different blind spots. Standard review after, quality and style. Adversarial last, the final interrogation, performed on code that has already survived four rounds of scrutiny. 05 What This Costs vs. What It Saves Running five review gates on every significant change adds roughly 10 to 20 minutes per session, depending on the size of the diff. The alternative is the infra plan: eight critical issues including one that hands container root to anyone with Secret Manager write access, one that corrupts the database on every deploy, one that prevents anything from starting on first run, and one that pages on-call when the system is working correctly. The math is not complicated. Find the shell injection in a review, or find it at 2am when something unexpected gets injected into your deploy environment. Find the concurrent writer problem in a review, or find it after three weeks of mysterious database corruption that you can't reproduce in dev because your dev environment only runs one container at a time. Every gate has a price. Every uncaught bug has a much larger one. 06 The Spec Review Variant The gates don't only apply to code. Running adversarial review on a design document before implementation recently caught 14 issues in a spec for a new pricing feature. Among them: A direct contradiction between two sections about what ships in Phase 1. Volume discount mechanics described but not specified, step discount or blended rate? TypeScript types referenced throughout but never defined anywhere. 
An effort estimate of three to four days that didn't account for the possibility that the actual pricing formula was structurally different from what the spec assumed. None of those would have been caught by a code review. They would have been found during implementation, when fixing them means rewriting code that already exists, not amending a document that doesn't yet. The cost of a contradiction in a spec is one conversation. The cost of the same contradiction discovered mid-implementation is days. 07 The Rule Never ship AI-generated code through a single review pass. The model that wrote the code will often approve the code. It built the same mental model you did during the session. It shares your blind spots because it was present when you formed them. You need reviewers with different postures: one that checks whether the design is right, one that checks whether the implementation matches the intent, one that checks how the change lands across the whole codebase, one that checks code quality, and one that assumes failure and tries to prove it. Five gates feels like overhead. It feels that way right up until the adversarial reviewer finds the thing that would have taken down production at the worst possible moment. Then it feels like it paid for itself for the next six months, and you run all five gates from that point forward without needing to be reminded why. --- # How I Dramatically Improve My Existing Applications Using Claude Code, gstack, and Superpowers URL: https://niranta.blog/ai/improving-existing-apps-with-claude-code-gstack-superpowers/ Category: AI Published: April 30, 2026 Read time: 11 min Words: 2008 Most AI coding workflows assume you're starting from scratch. Here is the exact system I use for production code I already have. I've worked on production systems for years. Most of the time I'm not starting from zero, I'm refactoring legacy backend code, fixing performance bottlenecks, or modernizing a UI that's starting to feel dated. 
That's a fundamentally different problem than greenfield development, and it demands a different approach. Raw Claude Code is powerful, but pointed at an existing codebase without guardrails it can introduce subtle bugs or make inconsistent changes that compound in ways that are hard to untangle later. The model doesn't know your implicit conventions, your undocumented architectural decisions, or the landmines left by that sprint three years ago. That's why I've settled on a combination of two tools that sit on top of Claude Code: gstack, Garry Tan's virtual engineering team of specialist roles, and Superpowers, Jesse Vincent's structured seven-phase agentic workflow. They complement each other without overlap. gstack handles the "thinking" layer: expert audits, adversarial challenges, cross-model second opinions. Superpowers handles the "doing" layer: TDD discipline, git worktrees, and phased execution that prevents sloppy refactors on code that's already in production. I keep everything coordinated through a single TODO.md file that acts as my living, prioritized backlog. No fancy project setup. Claude Code is pointed at my repo root, CLAUDE.md is in place, and the session starts from there. Here is exactly how the flow works in practice. 01 understand and prioritize backend issues Every improvement cycle starts with targeted discovery using gstack's specialist commands. These replace vague brainstorming with actionable analysis rooted in the actual code. The first command is /plan-eng-review, a full engineering review that reads the codebase and surfaces performance bottlenecks, scalability problems, tech debt, and unreliable patterns. The prompt I give it is direct: Engineering review prompt "This is an existing production backend. Run a full engineering review. Identify performance bottlenecks, scalability issues, tech debt, and unreliable patterns. 
Read the codebase and update TODO.md under ## Backend Issues with P0/P1/P2 priorities, severity, reproduction steps, and suggested fix categories." From there I run /cso, gstack's Chief Security Officer role, for a security and compliance pass using OWASP and STRIDE. The output goes into the same TODO.md, appended as prioritized findings. Then I add a third voice with /codex, gstack's independent OpenAI Codex CLI reviewer running in adversarial mode; its job is to try to break the assumptions the earlier analysis made, challenge edge cases, and add anything the first pass missed. Adversarial review prompt "Run adversarial review mode on the current backend findings. Try to break assumptions, challenge edge cases, and add any unique risks to TODO.md. Compare with gstack's earlier analysis for cross-model insights." The combination of three distinct perspectives, engineering manager, security officer, independent adversary, surfaces things that any single pass would miss. Once the list is in TODO.md, I optionally run /plan-ceo-review if I need business-impact ranking layered on top of the technical priorities. 02 audit and improve UX and UI With the backend issues catalogued, I switch focus to the frontend. The same specialist approach applies. /design-review or /plan-design-review runs a structured review against the current UI for accessibility, visual hierarchy, mobile responsiveness, and patterns that have aged poorly. The key constraint in the prompt is respecting the existing design system; I don't want a blank-slate redesign, I want prioritized gaps with clear before/after descriptions. Design review prompt "Review the current UI/UX for accessibility, visual hierarchy, mobile responsiveness, and outdated patterns. Respect my existing design system. Summarize gaps and add prioritized items to TODO.md under ## UX Improvements with clear before/after descriptions." 
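The TODO.md those prompts converge on ends up with a predictable shape. An illustrative sketch follows; the section names come from the prompts above, but the items themselves are invented examples, not findings from any real audit:

```markdown
## Backend Issues
- P0 | high | N+1 query on the orders endpoint | repro: load /orders with 500+ rows | fix: batch fetch
- P1 | medium | Unbounded retry loop in the webhook worker | repro: force a 500 from the receiver | fix: backoff with a cap

## UX Improvements
- P1 | Secondary buttons fail WCAG AA contrast | before: 2.9:1 | after: 4.5:1
- P2 | Mobile nav overflows at 360px | before: horizontal scroll | after: collapsing menu
```

One flat file, two sections, every item carrying its priority and enough context to act on without re-running discovery.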
After that, /design-consultation or /design-shotgun brings in modern, production-grade suggestions at the component level. These go into TODO.md as specific, actionable changes, not aspirational notes. At this point TODO.md has two clean sections: backend issues and UX improvements, both fully prioritized. That's when execution begins. 03 safe execution, one issue at a time This is where Superpowers takes over, while gstack stays in the loop as a quality gate. The handoff feels natural: gstack audits and identifies what needs doing; Superpowers figures out how to do it safely without touching anything it shouldn't. For each P0 or high-priority item in TODO.md , the cycle is four moves. First, structured planning. I run /brainstorm followed by /write-plan with a prompt that focuses Superpowers on minimal safe changes to the existing codebase, not a rewrite, not an opportunity to clean up everything adjacent. Just the target issue, planned in TDD style. I always review the plan before approving it. That review step is not optional; it's where I catch misunderstandings before they become commits. Planning prompt "Based on the top item in TODO.md, brainstorm options then write a detailed TDD-style plan. Focus on minimal safe changes to the existing codebase." Second, isolated execution. Superpowers automatically spins up a git worktree so my main branch stays completely untouched. It runs the full TDD loop, red, green, refactor, and keeps changes atomic. The worktree isolation is the thing I trust most about this setup. Sloppy agentic changes on main are how you get a bad afternoon. Third, quality gates. Before anything merges, it runs through: a specialist check via /plan-eng-review or /design-review depending on what changed; a standard code review via /review ; adversarial mode via /codex specifically targeting the new changes; and a test or browser check via /qa . 
The adversarial pass on completed changes is the gate I've found most valuable; it's much easier for the model to attack a specific diff than to exhaustively audit a full codebase. Fourth, close the loop. When Superpowers finishes, the session ends with a prompt to mark the task complete in TODO.md, note any new issues discovered during the fix, and run /retro so Claude learns my preferences over time. The retro isn't ceremonial; it's how the system gets better at my codebase specifically. 04 the weekly rhythm that actually sticks The workflow only works if it has a cadence. The cadence that holds for me is this: one discovery day, two fix days, hard stops at the end of each. Discovery day is a quick pass with /plan-eng-review, /cso, /design-review, and /codex to refresh TODO.md. This takes less than an hour. The whole point is to walk into fix days with a prioritized list rather than making judgment calls in the moment about what to tackle. Fix days cover a maximum of one to two issues, typically one backend item and one UX item. Never more. The limit is not about time; it's about quality. Every additional issue in a session adds coordination overhead and increases the chance that the quality gates get rushed. Two well-executed fixes with clean tests and a proper retro are worth more than five half-finished ones. Before any merge or deploy: /qa and /ship, gstack's release checklist. It's become automatic. 05 when to skip pieces of this Not every change needs the full stack. There are two patterns I reach for when the full workflow is more than the problem warrants. For small, low-risk changes, a single function, a CSS adjustment, a config tweak, I skip Superpowers and just use gstack's /autoplan plus /review plus /codex. The adversarial review still happens because even small changes in production code can have unexpected surface area, but I don't need a git worktree and a TDD cycle for a three-line fix. 
For sessions that are purely backend refactoring with no UI changes, I lean heavier on Superpowers' built-in planning without pulling in as many gstack roles. The specialist perspectives add the most value at the audit and review stages; for pure execution on a well-understood problem, Superpowers' own structure is enough. The rule I would give anyone thinking about this: do not add the full workflow prematurely. For early-stage or exploratory code, one model and one env var is the right answer. Add the layers when you can name the specific failure mode you're trying to prevent, and when you're working on code that already has users depending on it. 06 why this combination holds together The reason gstack and Superpowers work together without friction is that they were designed for different problems. gstack's roles, engineering manager, designer, CSO, reviewer, Codex, exist to provide expert perspective and adversarial challenge. They're good at the things that one model misses when it's both the author and the reviewer of its own work. Superpowers exists to enforce process discipline on execution: TDD, git safety, phased delivery. These are orthogonal concerns. The tools don't compete; they hand off cleanly. What the system actually delivers is reliable forward progress on production code without regression surprises. Backend performance improvements that hold under load. UX changes that users notice for the right reasons. And none of what I've started calling "AI slop", plausible-looking changes that pass a surface read but introduce subtle incorrectness at the edges. That last thing is the hardest to defend against with a single-model, single-pass approach. The adversarial layers exist precisely because the model that generates the change shares the same blind spots as the model that reviews it. Different models, different perspectives, structured quality gates, that's the actual defense. 
If you're working on a real production codebase, the place to start is /plan-eng-review and /design-review in your next session. See what surfaces. Drop your stack or a specific pain point in the comments; I'm happy to share the exact prompt sequence I would run for it. --- # Splitting AI Models by Architectural Layer: When One Model Isn't Enough URL: https://niranta.blog/experiments/splitting-ai-models-by-architectural-layer/ Category: Experiments Published: April 29, 2026 Read time: 9 min Words: 2000 Most teams pick one model and hardcode it everywhere. We did too, until we didn't. Here is how splitting by architectural layer changed our cost profile, latency, and reasoning quality. When I started experimenting on a fun project, I did what most people do: picked one model, hardcoded it everywhere, and shipped it. AsyncAnthropic() at every call site. One MODEL_NAME env var. Done. It worked, until it didn't. The failure wasn't dramatic. There was no production incident, no page at 3am. It was subtler: a slow accumulation of wrongness. Cost reports that didn't make sense. Latency numbers that were acceptable but not great. The creeping feeling that we were paying Sonnet prices for jobs that didn't need Sonnet quality, and Sonnet speed for jobs where speed didn't matter at all. The problem wasn't capability. Modern frontier models are genuinely remarkable across a wide range of tasks. The problem is that using one model for fundamentally different workloads is like using a Formula 1 car for your grocery run. Technically possible. Expensive. And completely wrong for the job. Here is how we split them, and why it mattered. 01 the two jobs The clearer I got about what was actually happening in our stack, the more obvious the split became. We had two distinct LLM-consuming layers with almost perfectly opposite requirements. The first is the user-facing chat path, what we call the BFF, Backend for Frontend. 
A user types something, expects a response in under a second, and needs rich instruction-following: structured tool calls, SSE streaming, nuanced output that doesn't feel robotic. Latency is visible here. Quality is visible here. Users will tell you when either is off. This layer runs Claude Sonnet 4.6. The second is the background processing pipeline, analysis jobs, draft generation, agent orchestration. These jobs land in a queue, run in seconds or minutes, and return JSON. No one is watching a spinner. No one is waiting. What you want is throughput, reasoning depth, and cost efficiency. This layer runs Grok fast reasoning. The insight, once you see it, is hard to unsee: user-facing completions are latency-sensitive and quality-visible. Batch completions are throughput-sensitive and cost-visible. They optimize for entirely different dimensions. Pretending they are the same problem, routing them to the same model with the same billing rate, costs you money, latency, or both. Usually both.

02 what we found in the codebase

Before writing a single line of new code, we audited what already existed. Codebases have a way of encoding institutional knowledge that no one ever wrote down, and this one was no different. The agents layer already had provider routing via a _make_llm_client() function. The logic was dead simple:

```python
# llm_agent.py
def _make_llm_client(model: str):
    if model.startswith("claude-"):
        return ChatAnthropic(model=model)
    return ChatXAI(model=model)
```

Model name prefix determines the provider. No registry, no config file, no abstraction layer. Just a string check. The agents layer had been doing this for weeks, quietly, without fanfare. Someone on the team had made a pragmatic decision and moved on. The repl module had ChatXAI instantiated directly. Two call sites, two different patterns, evidence that the codebase was already converging on xAI for backend work, just inconsistently. No one had connected the dots and made it deliberate. 
The gap was in the analysis and drafts routers: both directly instantiated AsyncAnthropic() with no abstraction. Switching providers required code changes. The kind of code change that feels routine until you do it six times and realize you need a different approach. What the audit told us was not that the codebase was broken, it was that it was converging on the right answer through organic pressure. Our job was to make that convergence explicit and durable.

03 the implementation

We built two modules: llm_policy.py and llm_client.py. One decides which model to use. The other routes the completion to the right provider. Clean separation of concerns. llm_policy.py resolves which model to use for a given call site:

```python
# llm_policy.py
def resolve_model(env_key: str, logger: Logger) -> str:
    model = (
        os.getenv(env_key)
        or os.getenv("MODEL_NAME")
        or "claude-haiku-4-5-20251001"
    )
    if RESTRICT_MODELS and model in _DENIED_MODELS:
        logger.warning(
            f"Model {model} blocked by RESTRICT_MODELS (env_key={env_key})"
        )
        model = "claude-haiku-4-5-20251001"
    return model
```

Two named env vars drive the split: CGX_FRONTEND_MODEL_NAME for the BFF chat path, and PROCESSING_BACKEND_MODEL_NAME for analysis, drafts, and the agents orchestration layer. MODEL_NAME survives as a legacy fallback during migration, no need to break anything while you transition. Haiku is the cost-safe default if nothing is set. The RESTRICT_MODELS=true flag (on by default) blocks the entire Sonnet and Opus family in dev and staging environments. This is the guardrail I am most proud of. Dev environments accumulate expensive habits. You run a test, it hits a frontier model, the cost is invisible in the moment, and then you wonder why the bill is weird at the end of the month. Blocking by default makes the expensive choice deliberate. 
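That guardrail is easy to sketch as a self-contained, runnable version. The deny list below is an assumption: the post says the Sonnet and Opus families are blocked but doesn't show the actual _DENIED_MODELS contents, so prefix matching stands in for it here.

```python
import logging
import os

# Assumed deny rule: the real _DENIED_MODELS list in llm_policy.py is not
# shown in the post, so prefix matching on the Sonnet and Opus families
# stands in for it.
_DENIED_PREFIXES = ("claude-sonnet-", "claude-opus-")
_SAFE_DEFAULT = "claude-haiku-4-5-20251001"

def resolve_model(env_key: str, logger: logging.Logger) -> str:
    # Precedence: named env var, then legacy MODEL_NAME, then cost-safe default.
    model = os.getenv(env_key) or os.getenv("MODEL_NAME") or _SAFE_DEFAULT
    # RESTRICT_MODELS is on unless explicitly disabled.
    restrict = os.getenv("RESTRICT_MODELS", "true").lower() != "false"
    if restrict and model.startswith(_DENIED_PREFIXES):
        logger.warning(
            "Model %s blocked by RESTRICT_MODELS (env_key=%s)", model, env_key
        )
        model = _SAFE_DEFAULT
    return model
```

With nothing else set, a dev environment that asks for a Sonnet model gets the Haiku default back; opting into the expensive model requires setting RESTRICT_MODELS=false on purpose.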
llm_client.py routes completions to the right provider:

```python
# llm_client.py
async def simple_complete(
    model: str, system: str, user_prompt: str, max_tokens: int, logger: Logger
) -> str:
    if model.startswith("claude-"):
        return await _anthropic_complete(
            model, system, user_prompt, max_tokens
        )
    return await _xai_complete(
        model, system, user_prompt, max_tokens, logger
    )
```

Same prefix heuristic as the agents layer, now applied uniformly. AsyncAnthropic for Claude models, ChatXAI via LangChain for everything else. xAI imports are deferred inside _xai_complete() so there is no import cost on the Anthropic path, a small thing, but the kind of small thing that matters in a long-running service. Error handling is standardized across both paths: missing API key returns a 503, upstream failure returns a 502. Routers stay thin. The distinction between "we can't reach the provider" and "the provider rejected the request" is meaningful and now consistent. One explicit carve-out: chat streaming is excluded from simple_complete. SSE keepalives and tool-use interleaving in streaming mode are complex enough to warrant their own path. chat/core.py stays Anthropic-native for now. This is intentional, not a gap. Streaming is a different contract with a different failure surface, and collapsing it into a general-purpose completion function would create more problems than it solves.

04 the cost tracking wrinkle

Multiple providers means multiple rate cards, and that creates a tracking problem that is easy to underestimate. When everything runs on one provider, cost attribution is straightforward. When you add a second, the math gets slippery, especially if some models are brand new and you don't have a rate card yet. 
We solved this with a single function in the shared core:

```python
# cost.py
def compute_cost_usd(
    model: str, input_tokens: int, output_tokens: int
) -> tuple[float, bool]:
    rates = (
        RATES.get(model)
        or RATES.get(DEFAULT_MODEL)
        or FALLBACK_RATES
    )
    pricing_unknown = model not in RATES
    cost = (
        input_tokens * rates[0] + output_tokens * rates[1]
    ) / 1_000_000
    return cost, pricing_unknown
```

The pricing_unknown boolean is the key design choice. It lets you distinguish "this model costs $0.00" from "we have no rate card for this model." Both get stored in the database. Dashboard queries can filter on pricing_unknown=false to get accurate cost reports, and flag the rest for review. This sounds like a small thing, but it is the difference between a cost dashboard you can trust and one you can only squint at. The moment you add a new model mid-quarter, all the rows without rate cards become noise unless you have a way to mark them as intentionally unknown. The flag makes the unknown explicit.

05 what shipped

After the refactor, switching from Claude to Grok for backend processing is a one-line environment change:

```
# Production config
PROCESSING_BACKEND_MODEL_NAME=grok-4-fast-reasoning
```

No code changes. No deploys beyond config. The production split looks like this:

| Layer | Model | Why |
| --- | --- | --- |
| Chat (BFF) | claude-sonnet-4-6 | Latency, tool use, SSE streaming |
| Analysis + Drafts | grok-4-fast-reasoning | Throughput, reasoning depth, cost |
| Agents pipeline | grok-4-fast-reasoning | Same as above |
| Default (safe) | claude-haiku-4-5-20251001 | Cost floor for dev and staging |

The thing I keep coming back to is how invisible the split is once it is in place. The calling code does not know which provider it is hitting. The error handling is identical. The cost is tracked the same way. The abstraction is thin enough to see through when you need to debug, but thick enough to absorb provider changes without touching business logic. That is the goal of any good infrastructure layer. Not to be clever. To be boring in the right ways. 
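The pricing_unknown flag from the cost-tracking wrinkle pays off at query time. Here is a toy end-to-end sketch, with made-up rate numbers and a stripped-down compute_cost_usd, of how an aggregation can trust what it sums:

```python
# Toy rate card: (input USD per Mtok, output USD per Mtok). The figures
# are invented; the real table lives in the shared core's cost.py.
RATES = {"claude-haiku-4-5-20251001": (1.0, 5.0)}
FALLBACK_RATES = (1.0, 5.0)

def compute_cost_usd(model: str, input_tokens: int, output_tokens: int) -> tuple[float, bool]:
    rates = RATES.get(model) or FALLBACK_RATES
    cost = (input_tokens * rates[0] + output_tokens * rates[1]) / 1_000_000
    return cost, model not in RATES  # second value: pricing_unknown

# Usage rows as they might land in the database: (model, input, output).
rows = [
    ("claude-haiku-4-5-20251001", 1_000_000, 200_000),
    ("grok-4-fast-reasoning", 500_000, 100_000),  # no rate card yet
]
costs = [(model, *compute_cost_usd(model, i, o)) for model, i, o in rows]

# Dashboard-style split: trusted spend vs. rows flagged for review.
known_spend = sum(cost for _, cost, unknown in costs if not unknown)
needs_rate_card = [model for model, _, unknown in costs if unknown]
```

The sum only includes rows with a known rate card; everything else lands in an explicit "needs a rate card" bucket instead of silently polluting the total.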
06 when to do this Not every AI product needs a model split. Most early-stage products should run on one model, one env var, and ship fast. Premature abstraction in infrastructure is a real tax. But there are three signals that tell you it is time. The first is when you have a user-facing completion path and a background processing path that have been running long enough to have visible cost and latency profiles. These jobs have different SLAs by definition. The moment you can articulate the difference, you are ready to split. The second is when cost-per-request starts showing up in conversations that are not engineering conversations. When a finance review or a product prioritization meeting includes "our model costs are higher than expected," the split has already paid for itself in the time it buys you to investigate. The third is when a new model ships and you want to evaluate it for specific workloads without a rewrite. The prefix-routing pattern costs almost nothing to implement and makes that evaluation a config change instead of a project. New models are shipping fast right now. The flexibility compounds. The rule I would give anyone: do not do this prematurely. One model, one env var, ship it. Add the split when you can name the specific tradeoff you are trying to capture, not before. The architecture should follow the understanding, not precede it. --- # Token Anxiety: How AI Rate Limits Hijacked My Brain URL: https://niranta.blog/ai/token-anxiety/ Category: AI Published: April 26, 2026 Read time: 7 min Words: 1108 My entire day-to-day existence is now tethered to tokens. When they run out, I don't just lose productivity. I lose my damn mind. I recently had a quiet, slightly terrifying realization: my entire day-to-day existence is now tethered to tokens. Not crypto tokens. Not auth tokens. 
Just plain LLM tokens, those invisible, ever-dwindling units of compute that power every interaction with Claude, Gemini, Cursor, Codex, and the rest of my digital crew. And when they run out? I don't just lose productivity. I lose my damn mind. My stack has become a carefully balanced (and stupidly expensive) fleet of AI "employees" I treat like a chaotic little startup. Claude (on the Max plan) handles the serious coding, where I'm constantly riding the 5-hour and 7-day limits like a joyrider. Cursor ($60/month) lives inside my IDE as the ride-or-die pair programmer. Codex CLI is my go-to for quick command-line wizardry, while the regular Codex tackles code reviews and rapid mocking. DALL·E turns half-baked ideas into visuals, Gemini CLI grinds through ops and infrastructure, and Greptile patiently waits to devour any pull requests that come its way. I spread my usage across all of them like a responsible adult diversifying a portfolio, squeezing maximum output every single day. It usually works beautifully. Until the entire fleet decides to go on synchronized strike. 01 when the fleet goes dark That's exactly what happened last week. Everything dried up at once: Claude out, Cursor throttled, Gemini a desert, Codex CLI silent. Even poor Greptile looked bored out of his silicon mind with zero PRs to review. I sat there staring at my blank screen like a man whose phone, wallet, car keys, and coffee machine had vanished. No agents to bounce ideas off, no instant reviews, no quick vibe checks. Life felt aimlessly, existentially pointless. I found myself refreshing the Claude dashboard every 47 seconds, counting down the three hours until my next reset like it was New Year's Eve. That moment of total shutdown is what finally made it crystal clear: tokens aren't just a tool or a subscription anymore. They've become the oxygen, the dopamine, and the entire operating system of my brain. 
02 the psychology of agentic thinking This realization led me straight into understanding the deeper psychology at play. My thinking has become fully agentic, I offload, iterate, parallelize, mock, and review everything with my AI team. They're no longer simple assistants; they're co-pilots, rubber ducks, creative partners, and late-night coworkers. So when the tokens vanish, it feels like the whole office quit overnight, leaving me wondering what skills I even have left on my own. The anxiety is low-grade but constant: the paranoia of "am I wasting tokens on this prompt?" The way I watch usage meters like a deranged day trader. The guilt after blowing a massive context window. And the sudden withdrawal when everything resets? It's a brutal motivation crash. My brain has been rewired for instant AI gratification, and cold turkey hits harder than I expected. 03 the analog coping rituals During those forced dry spells, though, I started noticing something surprising. I rediscovered an entire offline life I'd almost forgotten. I'd cook actual meals (chopping vegetables beats prompting "give me a 30-minute dinner recipe"), clean my office (the dust bunnies were forming their own startup), call real friends (not the ones living in my terminal), or just go for walks without narrating every thought to Claude for "idea synthesis." These analog breaks, as annoying as they felt at first, became weirdly sacred. They reminded me I'm still a human with hands, a kitchen, and relationships that don't bill by the token. 04 burn baby burn Then, just as the downtime was starting to feel endless, relief arrived like manna from heaven. Opus 4.7 got some rate-limit breathing room. My son and I looked at each other, grinned like maniacs, and decided: we're getting that lost week back tonight. "Burn baby burn" became our battle cry. 
We hammered Opus 4.7 for 13 straight hours, all night long, with code flying, ideas exploding, and Greptile finally getting the PR feast he'd been waiting for. Tokens were incinerated at industrial scale. It was chaotic, exhausting, and one of the most fun father-son nights we've had in months. We shipped more in that single all-nighter than I had in the previous several days combined. That wild recovery perfectly captures the strange duality of token anxiety. The limits drive you crazy with frustration and aimlessness… but when the gates finally open again? Pure, glorious, slightly unhinged productivity rushes back in. 05 will governments tax your ai employees? All of this has me thinking about the bigger picture. I'm basically running a tiny startup inside my head, employing a whole squad of AI workers 24/7 who code, review, design, operate, and brainstorm. I pay them in tokens, real money flowing straight to Anthropic, OpenAI, Google, and the others. So I can't help but wonder: when will governments notice? Will token usage eventually get taxed like payroll? Will my monthly AI bill include a new "digital employee tax" line item? Could we see AI labor regulations that treat heavy users like employers suddenly responsible for benefits or minimum "wage" equivalents for their silicon staff? It sounds ridiculous today, but five years ago the idea of paying hundreds a month for AI coworkers smarter than most juniors sounded ridiculous too. If I'm already having all-night father-son token-burning sessions like a startup crunch, the taxman probably isn't far behind. Maybe local and open-source models will finally kill token anxiety for good. Or maybe the anxiety just evolves into "tax anxiety," and we all start optimizing for deductible AI spend. Either way, this feels like the new normal. 
--- # Society of the Snow: What a Rugby Team's 72 Days in Hell Taught Me About Real Teamwork URL: https://niranta.blog/personal/society-of-the-snow/ Category: Personal Published: April 26, 2026 Read time: 8 min Words: 967 A rugby team survives 72 days in the Andes after a plane crash. What their story reveals about teamwork, resilience, and what it really takes to survive when everything falls apart. My friend recommended Society of the Snow with her usual high bar. She said it was a must-watch, the best film she had ever seen. And I know her. I've watched her taste in films prove itself over almost a year now. I take her recommendations seriously because she's earned that trust. I watched it knowing it would teach me something. IMDb 7.8, Rotten Tomatoes 90%. I watched it today, April 26, expecting a standard survival movie about a plane crash. I was wrong. It hit differently. Knowing it's a true story made every frame heavier. By the end, I sat in silence, thinking about life, teams, and what it really takes to survive when everything falls apart. She was right. 01 the story (no major spoilers) In 1972, a Uruguayan rugby team charters a plane to Chile for a match. The plane crashes into the Andes. No food. Freezing cold. No rescue coming. What follows is 72 days of hell, avalanches, starvation, impossible choices, and an extraordinary display of human will. The film doesn't sensationalize the worst parts. It shows the quiet, grinding reality: how ordinary young men became a functioning "Society of the Snow." 02 what hit me hardest: rugby didn't just prepare them, it saved them These weren't elite survivalists. They were amateur rugby players. But that "small amount of training" mattered more than any wilderness course could have. Rugby gave them:

Discipline and hierarchy: a clear captain (Marcelo) who stepped up immediately. Roles emerged naturally. Someone handled medical care, another engineered water systems from wreckage, others maintained morale. 
Mental and physical resilience: they pushed through pain most of us can't imagine. Broken bones, infections, -30°C nights, and still they kept moving. Team-first DNA: "We worked as a team, a rugby team, there was never a fight," one survivor said. They sacrificed for each other. They made pacts. They carried the weak. They grieved together and kept going. The movie shows this beautifully, not with dramatic speeches, but through small, consistent acts: sharing warmth, rotating sleeping spots, encouraging the hopeless, and eventually sending three men on a suicidal trek to find help. I walked away convinced: Rugby saved their lives. Not because they were stronger individually, but because they already knew how to operate as one body. 03 what this means for life and work We all face our own "Andes." A sudden layoff. A failed project. A health crisis. A market crash. A toxic culture. When the plane goes down, the people who survive aren't the smartest or the strongest; they're the ones who already know how to function as a team. Here's what I took away: 04 1. real teams close gaps, they don't create them In the fuselage, there were no silos. The "rugby players" and the "non-players" became one unit. In companies, we talk about cross-functional collaboration, but most teams still protect their turf. The survivors teach us: when survival is on the line, you either operate as one organism or you die separately. They had no special gear. What they had was the habit of pushing through discomfort from years on the rugby pitch. In corporate life, we often over-index on strategy and under-index on building mental toughness and physical stamina in our people. The best teams I've seen treat resilience like a muscle: they train it daily through hard conversations, stretch goals, and supporting each other when it gets ugly. Marcelo didn't have authority from HR. He earned it by staying calm, organizing, and giving people hope when there was none. 
In companies, the real leaders during a crisis are rarely the ones with the biggest titles. They're the ones who organize the "society", who keep people fed (metaphorically), warm, and moving forward. The most powerful moments aren't the dramatic ones. They're the quiet decisions: "If I die, use my body so the others can live." That level of trust doesn't appear in a crisis. It has to exist before the plane crashes. The best companies I know have this: people who would go to the wall for each other because they've built that bond over time. 05 final thought Society of the Snow isn't really about cannibalism or even survival. It's about what happens when a group of ordinary people decide they will not let the mountain win, and they refuse to let each other go. In life and in business, the mountains will come. The question is: when the fuselage is buried in snow and hope is gone, who are you standing with, and more importantly, who is standing with you? My friend was right. This movie has meaning. It reminded me that the strongest teams don't just work together. They survive together. And sometimes, that's the only thing that matters. --- # What I'm Reading in 2026 URL: https://niranta.blog/personal/reading-list-2026/ Category: Personal Published: April 2, 2026 Read time: 13 min Words: 4857 Seven areas I want to get better at this year. Not a curriculum. Not a schedule. Just an honest accounting of where the gaps are. 01 AI in Medicine & Cancer 01 Beating Cancer with AI Stephen DeStefano, 2025 A personal account of navigating late-stage cancer through research and AI collaboration with doctors. A real-world example of what AI plus human agency can do when the stakes are highest. View on Goodreads 02 Artificial Intelligence in Oncology Sachi Nandan Mohanty et al., 2025 A comprehensive technical guide to AI analyzing tumor genetics and medical imaging, predicting patient outcomes, and personalizing cancer treatment. The frontier of AI changing medicine. 
View on Goodreads 03 Artificial Intelligence in Cancer Rishabha Malviya et al., 2025 Machine learning, digital twins, and AI-accelerated drug discovery for cancer detection, diagnosis, and treatment. The bridge between longevity science and the AI revolution. View on Goodreads 02 AI, World Change 04 Elon Musk Walter Isaacson, 2023 The definitive biography of the man building xAI, Tesla, SpaceX, and Neuralink simultaneously. How one person at the intersection of AI and manufacturing is already bending the arc of history. View on Goodreads 05 Co-Intelligence: Living and Working with AI Ethan Mollick The most practical book on collaborating with AI as a genuine thinking partner. Required reading for anyone building with these tools daily. View on Goodreads 06 Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI Karen Hao, 2025 An insider account of the AI race, the power dynamics, the ethics, and what's actually happening inside the organizations building these systems. View on Goodreads 07 Nexus Yuval Noah Harari, 2024 On information networks evolving into AI and reshaping human civilization, how the systems we build to process information end up processing us. View on Goodreads 08 Genesis: Artificial Intelligence, Hope, and the Human Spirit 2026 A forward-looking examination of AI's impact on creativity, meaning, and what it means to be human after this transition. View on Goodreads 09 The Coming Wave Mustafa Suleyman On AI's unstoppable momentum and the containment problem, written by someone building it who is also genuinely worried about it. View on Goodreads 10 The AI Ideal: AIdealism and the Governance of AI Niklas Lidströmer Ethical frameworks and the gap between what we say we want AI to do and the governance structures we're actually building. View on Goodreads 11 If Anyone Builds It, Everyone Dies Eliezer Yudkowsky & Nate Soares A provocative, uncompromising case on superintelligence risk. 
Read it to understand the strongest version of the argument against fast AI development. View on Goodreads 12 The Thinking Machine Stephen Witt, 2025 The story of Nvidia and the hardware revolution underpinning everything in AI. The picks, shovels, and the people who built them. View on Goodreads 03 Understanding the Universe 13 Webb's Cosmos Marcin Sawicki, 2025 JWST's revolutionary images and discoveries about the early universe, exoplanets, and star formation. The universe through the sharpest eyes we've ever built. View on Goodreads 14 Why Do We Exist? Hakeem Oluseyi, 2026 Cosmic origins, quantum realms, and the precise conditions that make life, and us, possible. The question underneath all the other questions. View on Goodreads 15 The Echoing Universe Emma Chapman, 2025 How radio astronomy reveals the invisible cosmos, the signals no human eye has seen and the new frontiers of observation they're opening up. View on Goodreads 04 Quantum Physics & Computing 16 Helgoland Carlo Rovelli The most accessible account of the quantum revolution, how a 23-year-old Heisenberg rewrote physics on a windy island, and what it still means a hundred years later. View on Goodreads 17 Quantum Computing in 2026: A Real-World Guide to Qubits, Code, and the Coming Quantum Economy 2026 A non-technical guide to how quantum computing is already reshaping finance, medicine, and cryptography, for people who build things, not just study them. View on Goodreads 18 Understanding Quantum Technologies 2025 Olivier Ezratty, Updated 2026 The comprehensive reference on where quantum hardware, software, and applications actually stand versus the hype. View on Goodreads 19 Quantum Computing for Everyone Chris Bernhardt The clearest entry point into quantum mechanics and computation for anyone without a physics background. Still the best starting point. 
View on Goodreads 05 Peptides & Longevity 20 The Peptide Handbook 2026 Nina Tigre, 2026 A clear, up-to-the-minute guide to peptide science, GLP-1, safety, and longevity applications. View on Goodreads 21 Peptides for Health Mark Gordon, M.D. Therapeutic peptides for healing, neuroregeneration, immunity, and longevity from a clinician's perspective. View on Goodreads 22 Outlive: The Science and Art of Longevity Peter Attia, M.D. Still the definitive framework on preventing chronic disease through exercise, nutrition, sleep, and emotional health. View on Goodreads 06 Business, Energy & Global Trade 23 The Prize Daniel Yergin, Updated edition The definitive history of oil, money, and power. Essential for understanding how energy has shaped the modern world, and how AI's massive energy demands will reshape it again. View on Goodreads 24 The World for Sale Javier Blas & Jack Farchy The hidden world of commodity traders who control the oil, metals, and grain that run the global economy. Eye-opening on how power actually flows in the world. View on Goodreads 25 Memory Makes Money Harry Lorayne A practical guide from the late master of memory technique. Lorayne built a career on showing that recall is a trainable skill, not a fixed trait, and the business case for keeping names, numbers, and details in your head is the spine of the book. View on Goodreads 07 Holistic & Mindset 26 Atomic Habits James Clear The systems book I keep coming back to. Not about motivation. About architecture, how to make the right behavior the path of least resistance. View on Goodreads 27 The Let Them Theory Mel Robbins On boundaries, letting go, and conserving energy for what actually matters. The mental complement to everything else on this list. 
View on Goodreads --- # How I Build Software with AI URL: https://niranta.blog/ai/ai-orchestrated-development/ Category: AI Published: March 25, 2026 Read time: 9 min Words: 1268 A spec-driven workflow layering Claude Max, gstack, and Superpowers into a multi-model, multi-defense-layer engineering system. I spend most of my waking coding hours, which are not many, inside Claude Code on the Max plan. It's not just my primary tool; it's the conductor of an entire orchestra of specialized agents, skills, and cross-model debates. Over time, I've refined a highly structured, spec-driven workflow that consistently delivers clean, production-ready features with surprisingly few bugs. The secret isn't any single trick. It's the deliberate layering of human taste, rigorous process, and AI specialization, all tuned for maximum quality through multiple defense layers while keeping my focus narrow and high-leverage. 01 What makes this setup distinctive After diving into the 2026 Claude Code community, I realized many pieces of my workflow are popular, but the full orchestration is quite personal. Two open-source skill packs dominate among serious Claude Code users. For those new to this, these are essentially plugin libraries you install to give Claude new specialized commands. gstack, by YC CEO Garry Tan This is basically my virtual engineering team. It includes the exact commands I rely on daily: /office-hours (product interrogation), /plan-ceo-review, /plan-eng-review, /qa (with real browser testing), /cso (security), /review, /codex (OpenAI second opinion), /ship, /browse (Playwright-powered), and even /canary. It turns Claude into a full sprint process: Think → Plan → Build → Review → Test → Ship → Reflect, with strong guardrails and user sovereignty. Superpowers, by Jesse Vincent This powers my spec-first discipline with /superpowers:brainstorming, writing-plan, parallel agents, and TDD enforcement. 
It also includes powerful visual tools that evolved into what I use as /visual-explainer, turning complex plans and terminal output into clean HTML diagrams and architecture visuals. Other distinctive elements in my flow: Early Grok vs. Gemini pitting to surface clarity gaps before any planning begins. Parallel adversarial + multi-skill reviews at multiple gates. Scheduled agents running every 3 hours that test APIs and UX flows, log issues, and update TODO.critical.md + TODO.md (features stay 100% human-driven). Greptile review loops + ongoing experimentation with /ultrareview (cloud multi-agent deep reviews with sandbox verification). Isolated worktrees with submodules + Figma MCP for true component-driven development. Heavy reliance on /visual-explainer during architecture, security, and design phases. While many developers use pieces of this (SDD is now mainstream, with tools like GitHub Spec Kit and plan mode in Claude Code), the complete "human conductor + multiple defense layers + automated bug maintenance" system feels like my evolved personal operating system. 02 The quality engine: multiple defense layers This is where the real power shows up. I don't chase breadth. I deliberately narrow my focus to what only a human can do well: product taste, architecture decisions, security thinking, and user experience. Everything else gets filtered through rigorous layers so I can spend the majority of my time reading and understanding code rather than writing or debugging it. Here's exactly how an idea travels through the defense layers in practice: 01 The Crucible Detailed idea → Grok/Gemini debate → /office-hours or brainstorming. 02 The Blueprint /plan-ceo-review → /plan-eng-review → writing-plan + multi-review (plan-eng + codex + adversarial). 03 The Assembly Parallel-agent build. 04 The Gauntlet Parallel post-build reviews (codex + review + plan-eng-review + adversarial). 05 The User Test /qa + /browse (Playwright) + /design-review when UI is involved. 
06 The Final Polish /ship → Greptile loop → Ultrareview experimentation. The result: dramatically fewer bugs reach me or production. Because the filters are so strong, I get to operate with a narrower scope and deeper attention. I'm not context-switching across a thousand micro-issues. Instead, I'm deeply engaged with UX flows, component architecture, and security implications, often visualized cleanly via /visual-explainer. This multi-layer approach is what lets me treat features as high-craft work while the system handles the volume of validation and maintenance. 03 The supporting cast Claude Max: extended context and high limits are non-negotiable for long-running agent sessions and large codebases. claude mem + persistent CLAUDE.md patterns for cross-session continuity. Gemini CLI for documentation and occasional fresh perspective (I still jump into Cursor sometimes too). Ubuntu VM QA environment + push to Canary (where Claude involvement stops). 04 What's next In my next post, I'll go deeper into the monitoring side: how I set up continuous system observation, feed real signals back into the TODO system, and let scheduled agents constantly hunt and fix issues, all while I stay focused on the high-value creative and architectural work. This whole setup has turned Claude Code from a powerful coding assistant into something closer to a small, high-discipline engineering organization that I personally conduct. The compound effect on quality and focus has been massive. If you're on Claude Code Max, start with gstack and Superpowers; they're free and transformative. Then layer in your own model-pitting rituals and maintenance agents. The quality compounds fast. What's one defense layer or ritual you've added to your own flow? I'd love to hear about it. 
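The "scheduled agents every 3 hours" idea above can be sketched in a few lines. This is an illustrative minimal sketch, not the actual agent code: the check names, function signatures, and file layout are assumptions, and the only part drawn from the post is the behavior of running periodic checks and appending findings to TODO.critical.md or TODO.md for a human to triage.

```python
# Minimal sketch of a scheduled maintenance pass (hypothetical names):
# run a set of health checks and append any failures, timestamped, to
# TODO.critical.md or TODO.md so the human conductor triages them later.
from datetime import datetime, timezone
from pathlib import Path

CRITICAL_TODO = Path("TODO.critical.md")
NORMAL_TODO = Path("TODO.md")

def log_issue(title: str, detail: str, critical: bool = False) -> None:
    """Append a dated, unchecked task item to the appropriate TODO file."""
    target = CRITICAL_TODO if critical else NORMAL_TODO
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    with target.open("a", encoding="utf-8") as f:
        f.write(f"- [ ] {stamp} {title}: {detail}\n")

def run_checks(checks) -> int:
    """Run each (name, fn, critical) check; log failures and return their count."""
    failures = 0
    for name, fn, critical in checks:
        try:
            fn()  # a real check would probe an API or drive a browser
        except Exception as exc:
            log_issue(name, str(exc), critical=critical)
            failures += 1
    return failures

if __name__ == "__main__":
    def check_api_health():
        # Simulated failure standing in for a real HTTP probe.
        raise RuntimeError("GET /health returned 500")

    run_checks([("api-health", check_api_health, True)])
```

A cron entry such as `0 */3 * * *` would cover the every-3-hours cadence. The design point is that findings land in the same TODO files the human already reads, so triage stays in one place and feature work stays fully human-driven.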
--- # Get Comfortable Being Uncomfortable URL: https://niranta.blog/quotes/get-comfortable-being-uncomfortable/ Category: Quotes Published: February 6, 2026 Read time: 5 min Words: 763 Growth lives on the other side of discomfort. Every single time. Not the side of comfort. Not the side of good intentions. The side of actual, physical, psychological discomfort. The side where you feel like you don't belong. Where you're not sure you're good enough. Where things are hard and you want to quit. Most people avoid that side. So they never grow. 01 The comfort trap Comfort is a cage disguised as home. It feels safe. It feels like where you belong. And it is, which is exactly why you need to leave it. Your comfort zone is optimized for your current skill level. Everything in it is familiar. Everything you can already do. Everything is manageable. And none of it will make you better. You can fail and learn. You can be rejected and come back stronger. But if you're comfortable, you'll never do either. You'll just stay small. 02 What discomfort really means Discomfort isn't injury. It's not danger. It's just unfamiliarity. It's the feeling you get when you're doing something you haven't done before. When you're with people who are better than you. When you're trying something hard. Your brain interprets unfamiliarity as danger. So it sends discomfort signals. "Hey, you might get hurt. Let's go back to the comfort zone." But you won't get hurt. You'll just feel weird for a while. And then you'll adapt. And then you'll be better. 03 The gradient of growth You don't jump from comfort to impossibility. You move in gradients. A little uncomfortable. Then a little more. Then more. With each step, your new discomfort becomes tomorrow's comfort. The person who speaks in front of 10 people thinks speaking in front of 1,000 is impossible. Until they speak in front of 100. Then 500. Then suddenly 1,000 feels manageable. Same person. Same capability potential. Different comfort zones. 
04 The compound effect of discomfort Here's what's magical: once you get comfortable being uncomfortable, you can move fast. You're not afraid of new situations. You're not paralyzed by unfamiliarity. You're just slightly nervous, which is actually ideal for performance. You take on bigger challenges. You meet better people. You learn faster. You fail more (and learn more from it). And all of that compounds. 05 Your discomfort threshold This doesn't mean constant anxiety. There's a difference between productive discomfort and destructive stress. Productive discomfort is "I'm nervous about this presentation" (totally fine, even helpful). Destructive stress is "I might have a panic attack" (worth addressing differently). Know your own threshold. Push just past it. Not way past it. Just past. Most people know this intellectually. They know comfort = stagnation. They know growth requires discomfort. They just choose comfort anyway. Because discomfort is hard. Because it's easier to stay small. Because growth requires sustained effort instead of momentary motivation. But the cost of staying comfortable is invisibly high. It's all the things you never tried. All the relationships you never built. All the potential you never explored. Five years of staying comfortable costs you far more than five years of living slightly uncomfortable. You don't need to reinvent your life. Just find one thing that makes you slightly uncomfortable. Something that feels just outside your current zone. Join the group. Take the class. Have the conversation. Apply for the role. Write the piece. Feel the discomfort. Do it anyway. And notice how you grow. Comfort is a choice. So is growth. Pick one. --- # Two Ways to Do It: Right or Again URL: https://niranta.blog/quotes/two-ways-to-do-it-right-or-again/ Category: Quotes Published: January 30, 2026 Read time: 4 min Words: 662 There are only two ways to do anything: right the first time, or again. This isn't motivational nonsense. It's just math. 
If you do something wrong, you have to do it again. So the question isn't "should I rush?" but "do I want to do this once or twice?" Most of us choose twice. We tell ourselves we're saving time by rushing. But we're not. We're just moving the work to later when it's more expensive to fix. 01 The cost of corners Cutting corners always feels good in the moment. You move faster. You feel productive. You're getting things done. Then you have to do it again. And this time, it's worse. Because now you have to undo the wrong thing before you can do the right thing. You save 30 minutes of careful work and spend 2 hours fixing it. The math is simple. But we don't think about it in the moment because we're focused on the immediate gain. 02 Right the first time Doing it right the first time doesn't mean perfection. It means doing it with the care and attention it deserves. It means understanding what you're doing. It means checking your work. It's slower in the moment. But it's faster overall. You write the code carefully. You test it. You don't move on until it works. Then it works. Forever. You don't have to come back. You don't have to fix it. 03 The compounding effect Here's what's insidious: the rework compounds. You do something halfway. Then you do it again halfway. Then you have to spend real time fixing it. By then the codebase is a mess, the relationship is damaged, the system is fragile. But if you do things right the first time, they compound in the other direction. You build on solid ground. The next thing is easier because the foundation is good. 04 When speed matters There are times when you genuinely need to move fast. You're experimenting. You're exploring. You're prototyping. In those cases, you explicitly choose "again" mode. You know you're going to throw it away or rebuild it. You move fast because you're not trying to do it right. But then when it comes time to do it right, you start over. You don't iterate your way to quality. You rebuild with care. 
Be clear about which mode you're in. Fast and rough, or slow and careful. Don't try to be both at once. 05 The discipline of once There's a discipline to deciding you're only going to do something once. It forces you to slow down. To ask questions. To understand what you're doing. It's harder in the moment. But it's so much easier in the end. Pick something you're about to do today. And decide: am I going to do this right the first time, or twice? Then do what you decided. Once is always faster than twice. --- # Don't Ruffle Feathers Unless Necessary URL: https://niranta.blog/quotes/dont-ruffle-feathers-unless-necessary/ Category: Quotes Published: January 16, 2026 Read time: 5 min Words: 626 Social friction is expensive. Being the person who always pushes back, always questions, always challenges, it comes at a cost. And the cost isn't always worth it. This doesn't mean don't stand up for important things. It means know the difference between what's worth ruffling feathers over and what isn't. 01 Why social capital matters You have social capital with every person and group you're part of. Your reputation. Your credibility. Your political currency. When you use it on small things, you deplete it. When you use it on big things, it matters. The person who challenges everything has no capital left for the moment when something actually matters. They've cried wolf so many times that no one listens. This is why quiet people often have more influence than loud people. They've conserved their capital. When they speak up, people listen. 02 The wisdom of discernment There's an old principle: choose your battles. It sounds cowardly to the young. Until you realize that every battle fought is a battle that weakens you for the next one. The strategic question isn't "should this be different?" It's "is this worth my capital, energy, and relationships to change?" Sometimes the answer is no. Sometimes it's better to let it pass. 
To accept that this particular thing isn't perfect and that's okay. 03 The real feathers to ruffle What is worth your capital? Things that matter to your values or your work. Things that will compound. Things that affect not just you but others you care about. Starting a company isn't ruffling feathers, it's building something. Standing up to injustice isn't ruffling feathers, it's character. Pushing back on a terrible idea that will hurt your organization isn't ruffling feathers, it's leadership. But complaining about the office temperature? Correcting everyone's grammar? Being the pedant in meetings? That's ruffling feathers for ego, not principle. 04 The cost of constant friction There's a type of person who is always uncomfortable, always pushing, always in friction with the system. They think this is integrity. Often it's just exhausting, for them and everyone around them. They're difficult to work with. Difficult to befriend. Difficult to be around. Not because they're principled, but because they've confused constant friction with character. Real strength is the ability to be okay with imperfection when it doesn't matter, and immovable when it does. 05 Finding your line You need to know your own line. What do you actually care about? What will you stand up for? What are you willing to damage relationships over? For some it's integrity. For some it's fairness. For some it's honesty. For some it's growth. For some it's efficiency. Know your line. Then be at peace with everything else. That's not weakness. That's wisdom. --- # We Miss 100% of the Shots We Don't Take URL: https://niranta.blog/quotes/we-miss-100-of-the-shots-we-dont-take/ Category: Quotes Published: December 26, 2025 Read time: 5 min Words: 707 Wayne Gretzky said it. Everyone quotes it. But most people don't actually live it. We miss 100% of the shots we don't take. It's mathematically trivial and profoundly true. The only sure way to fail is to never try. Yet we spend our lives not trying. 
Building elaborate systems of hesitation, perfectionism, and self-doubt to guarantee the shots never leave our hands. 01 The shots we're afraid to take Most of us have a list. The book we're too scared to write. The conversation we're too nervous to start. The opportunity we're too uncertain to pursue. We tell ourselves we're not ready. That we'll try next year. That we need to prepare more, get more credentials, lose more weight, find a better partner, get the timing right. But the bar for "ready" keeps moving. It's always tomorrow. Always next month. Always when conditions are perfect. The cost of not taking the shot is invisible. You don't see the life you didn't live. You don't meet the people you didn't connect with. You don't experience the growth that comes from failure. So the cost feels like zero. But it's not. It's everything. 02 What taking the shot means Taking the shot doesn't mean you have to be good at it. It doesn't mean you have to make it. Gretzky's statement isn't about your batting average. It's about your attempt rate. You can miss every shot you take. But at least you'll know. You'll have data. You'll have experience. You'll have moved from theory to practice. And something strange happens: your percentage improves. Not because you got magically better, but because you're playing. You're in the game instead of in the stands. 03 The feedback loop There's a feedback loop that starts with taking the shot. You take the shot. You miss or make it. You learn something. You take another shot. You miss or make it. You learn again. This loop is the engine of all improvement. Not reading about taking shots. Not thinking about taking shots. Actually taking them. And the earlier you start taking shots, the more shots you'll have taken by the time it matters. That's compounding. 04 The types of shots Some shots are small. Asking someone out. Writing a piece. Taking a class. Starting a conversation. These shots have low stakes and high information value. 
Some shots are medium. Switching careers. Moving cities. Starting a business. These have higher stakes, but they're still survivable if you miss. Some shots are huge. Major life bets. These deserve more deliberation. But even these get better with smaller shots first. You build your shot-taking muscle on small shots first. Then medium ones get easier. Then you can even handle the huge ones. 05 Start with something small Don't wait for the perfect shot. Take a small one today. Write something. Reach out to someone. Ask for feedback. Apply for something you think you're not ready for. You don't have to make it. You just have to take it. Because one day you'll look back and realize: all your best outcomes came from the shots you took. Not from the ones you were still preparing to take. --- # Seek Forgiveness, Not Permission URL: https://niranta.blog/quotes/seek-forgiveness-not-permission/ Category: Quotes Published: December 19, 2025 Read time: 4 min Words: 1040 There is a profound difference between asking for permission and asking for forgiveness. Permission is a gatekeeper's tool. It puts someone else in charge of your forward motion. It makes your progress conditional on their approval, their timeline, their comfort. Forgiveness is different. It moves first, then explains. 01 Why we ask for permission We ask for permission because we're taught to. Respect, authority, deference, these are real things. And sometimes, permission-asking is genuinely required. Legal boundaries. Regulatory frameworks. Actual gatekeepers with legitimate power. But most of the time, we ask for permission where none is actually required. We ask our boss for permission to leave early, when we could just leave early and explain why we did. We ask stakeholders for permission to try something new, when we could experiment quietly and show results. We ask the market for permission to exist, when the market doesn't grant permission, the market responds to what's already there. 
Permission-seeking is often just deference masquerading as respect. It's a way to defer responsibility. If they say no, it's not your fault. If they say yes and it fails, you can blame their approval. But there's a cost to this approach: you've given someone else the power to determine your pace. 02 The power of forgiveness-seeking When you move first and ask for forgiveness after, the power dynamic inverts. Now you're not asking them to let you do something. You've already done it. You're asking them to understand why. You're presenting reality, not asking for hypothetical approval. This is not about being reckless. It's about capturing the upside of action. Most people are terrible at imagining what's possible. They're good at critiquing what already exists. They can see flaws in something concrete. But ask them to approve something that doesn't exist yet? They'll find a hundred reasons to say no. Show them the thing you've built, and suddenly they can have a real conversation about it. Not "Should we do this?" but "Here's what we did, what do you think?" The second conversation is far more productive. And you're in a stronger position: you've proven you can execute. You've taken the risk. And now you're asking for support or input on something real. 03 The calculus changes When you seek permission, you're asking someone to bet on your potential. On your vision. On what you might build if they say yes. Most people will say no. It's the safe choice. It's the path of least responsibility. When you seek forgiveness, you're asking someone to recognize what you've already done. You're presenting evidence. You've de-risked the entire conversation by going first and capturing the upside. The person you're asking for forgiveness has less power now. They can't kill your idea, it's already alive. They can only respond to what exists. Critique it, sure. Improve it, absolutely. But they cannot uncreate it. This shift in power is the entire point. 
You're not looking for permission to try. You're looking for support for what you've tried. 04 When permission really matters There are limits to this principle. Some things do require real permission. If there are genuine legal or regulatory constraints, you can't ignore them. If your organization has decision-making authority that you genuinely lack, you can't override it without consequences you're willing to face. The move-first strategy works best when you're operating in a space where the actual authority is ambiguous. When people assume they have gatekeeping power they don't actually have. When the worst case for acting is mild disappointment rather than actual harm. It also works when the person you're asking is fundamentally aligned with you. They want you to succeed. They're just risk-averse or bogged down in process. Show them something real, and they'll usually find a way to support it. 05 The identity shift The deeper point isn't really about tactics. It's about identity. When you ask for permission, you're positioning yourself as someone waiting for approval. Someone whose agency is conditional. Someone whose role is to request, not to decide. When you move first and ask for forgiveness, you're positioning yourself as someone who takes action. Someone who takes responsibility for outcomes. Someone who moves, learns, and adjusts. Over time, these positions calcify. People start to see you as either a requester or a doer. They behave differently around you based on which category you've placed yourself in. The smartest move is to start small with this. Don't move first on your biggest bet. Move first on something manageable. Do it, show it, ask for forgiveness. Let them see that you're the kind of person who can execute. Then, when you want to move on something bigger, they're already primed to say yes. Because by then, it won't be a theoretical question anymore. They'll know you can do the thing. 
The only question will be whether they want to support it. And that's a conversation you can actually win. --- # Heart Strong, Mind Stronger URL: https://niranta.blog/quotes/heart-strong-mind-stronger/ Category: Quotes Published: November 28, 2025 Read time: 5 min Words: 626 There's a hierarchy of strength that most people get wrong. They think it goes: physical > mental > emotional. As if toughness is a progression, and once you're mentally tough, emotional resilience follows naturally. In reality, it's inverted. The strongest people aren't the ones with the thickest skin. They're the ones with the strongest hearts, the deepest capacity to feel, to care, to be moved by things that matter. 01 The Heart-Mind Connection Consider what it takes to be moved by something. To let it in, to feel it fully, to let it change you. That requires vulnerability. That requires openness. That requires a willingness to be affected. A strong mind protects you. A strong heart exposes you. But that exposure is where growth lives. The person who can hear criticism and let it matter, who doesn't dismiss it or defend against it, that person grows. The person who can witness suffering and let it touch them, not as a moment of weakness but as a moment of connection, that person becomes more wise, more compassionate, more whole. 02 The Soft Strength There's a kind of strength that looks like weakness. It's the strength of saying "I don't know" when you don't. Of admitting "I was wrong" when you were. Of crying when something matters that much. We've created a world that celebrates hard strength, the mental toughness that shuts things out, the discipline that overrides feeling, the logic that dismisses intuition as unreliable. But hard things break. Soft things bend and stay whole. The strongest oak tree is the one that bends in the wind. The strongest person is the one who can feel deeply and still move forward. 
03 Where the Two Meet The highest human capacity is where a strong mind meets a strong heart. A mind sharp enough to know what matters. A heart open enough to care about it fully. You need the mind to navigate. You need the heart to know why you're navigating at all. It's easy to be mentally strong and emotionally closed, to be brilliant but cold. It's easy to be emotionally open and mentally weak, to be moved by everything but unable to act. But the rare thing, the thing that creates real strength, is both. 04 The Practice Building heart strength doesn't come from trying harder. It comes from letting yourself feel more. Let yourself be moved by beauty. Don't intellectualize it away. Let yourself be touched by others' pain. Don't distance yourself from it. Let yourself care about what you care about without apologizing for it or trying to seem untouchable. And then, with that open heart, use your mind to understand it, to act on it, to move thoughtfully in the world. The strongest people aren't the ones with the thickest armor. They're the ones who've learned to have both: a heart that's wide open and a mind that's sharp as steel. It's the combination that makes them unbreakable. --- # We Suffer More Often In Imagination Than In Reality URL: https://niranta.blog/quotes/we-suffer-more-often-in-imagination-than-in-reality/ Category: Quotes Published: November 21, 2025 Read time: 5 min Words: 614 We spend so little time actually present that it becomes hard to tell the difference between what's real and what we've invented. 01 The Anxiety Equation Seneca, the Stoic philosopher, captured it perfectly: "We suffer more often in imagination than in reality." Think about the last time you were anxious about something. How much of that suffering was actually happening in the present moment? How much of it was happening in your mind, in the future you were imagining? The person dreading a difficult conversation spends days in mental anguish. 
Then the conversation happens, lasts twenty minutes, and isn't nearly as bad as feared. All that suffering was optional. It was imagined suffering. 02 The Multiplication of Pain Here's the cruel mathematics of anxiety: you don't just suffer once. You suffer multiple times. You suffer in anticipation. You suffer in the moment. You suffer in retrospect, replaying it endlessly. Three instances of suffering for one actual event, and two of them were purely voluntary. Meanwhile, what you were actually afraid of? Often it takes less than an hour and isn't nearly as painful as the cumulative dread you created. 03 Breaking the Imagination Habit The antidote isn't positive thinking or denial. It's presence. This doesn't mean ignoring real problems. It means dealing with them now, in reality, rather than rehearsing them in fantasy. The person who prepares for a presentation by actually practicing, who addresses a relationship issue by having the conversation, who faces a health concern by seeing a doctor, these people suffer less. Not because they avoided pain, but because they refused to multiply it. 04 Reality Is Usually Kinder Here's what you learn when you finally stop suffering in imagination and actually step into reality: reality is usually milder than you thought. People are less judgmental than your mind predicts. Failures are less catastrophic. Rejections hurt less and end sooner. The things you can't control turn out to be less important than you feared. And the things you can control? They're easier to fix when you actually address them instead of spinning in anxiety about them. 05 Where to Begin The next time you feel anxiety rising, ask yourself: Is this happening now? Or am I imagining it? If it's imagination, you have a choice. You can keep rehearsing the bad version, multiplying your suffering. Or you can return to what's actually real, this breath, this moment, this room. And if it's something real that needs addressing? Do it. Today. 
Don't let your mind turn a single moment of real discomfort into days of imagined agony. Most of what we suffer from never happens. And what does happen is always more bearable than we imagined. The only real suffering is the unnecessary kind, the kind that lives entirely in your head. --- # The Main Thing Is To Keep The Main Thing The Main Thing URL: https://niranta.blog/quotes/the-main-thing-is-to-keep-the-main-thing-the-main-thing/ Category: Quotes Published: November 14, 2025 Read time: 5 min Words: 565 We live in an age of infinite distraction. Our attention is fragmented across a dozen devices, our time divided among competing priorities, our energy scattered in all directions at once. In this chaos, we lose sight of what matters most. The paradox of modern life is that having more options, more opportunities, more channels, more ways to spend our time, has made it harder, not easier, to focus on what truly matters. We're pulled in every direction, told that everything is urgent, that every opportunity is critical, that we must optimize every moment. 01 The Core Principle Stephen Covey didn't invent the idea, but he crystallized it perfectly: "The main thing is to keep the main thing the main thing." It's a principle of ruthless clarity about priorities. The challenge isn't identifying what the main thing is, most of us know, deep down. The challenge is keeping it the main thing when life, work, and opportunity constantly conspire to distract us from it. 02 The Cost of Distraction What's insidious about distraction is that it compounds. You don't lose your focus in a single dramatic moment. Instead, you drift. A small detour becomes a habit. A minor opportunity becomes a project. A side interest becomes a second job. Before you know it, you're running in circles, busy but unfulfilled, productive but not purposeful. You're optimizing the wrong things. 03 Ruthless Prioritization Keeping the main thing the main thing requires brutal honesty. 
It means looking at your calendar, your projects, your commitments, and asking: "Does this serve my main thing, or does it distract from it?" And if it distracts, you eliminate it. Not someday. Now. This is harder than it sounds because distraction often comes dressed up as opportunity. It looks important. It feels urgent. Someone is asking you to do it, and saying no requires courage. 04 The Power of Focus But when you truly commit to keeping the main thing the main thing, something shifts. You stop spinning and start building. Your work compounds. Your influence grows. Your satisfaction deepens because you're actually moving toward something that matters to you, rather than just reacting to everything that comes your way. The best people, in business, in art, in life, aren't the ones who do everything. They're the ones who do one thing exceptionally well, and have the discipline to protect that focus against all distractions. So ask yourself: What is your main thing? And be honest, are you actually keeping it the main thing? Or have you let the noise and opportunity and urgency of everyday life pull you away from what actually matters? The answer will tell you everything about where your life is heading. --- # Inch by Inch Is a Cinch, Yard by Yard It's Hard URL: https://niranta.blog/quotes/inch-by-inch-is-a-cinch-yard-by-yard-its-hard/ Category: Quotes Published: November 7, 2025 Read time: 5 min Words: 606 There's a sports saying: "Inch by inch is a cinch, yard by yard it's hard." It sounds simple. But it contains wisdom about how to approach anything difficult. The difference between success and failure, between finishing and quitting, often comes down to how you frame the goal. 01 The power of small measures When you commit to an inch, it's not scary. You can do an inch. An inch is manageable. An inch is something you can see happening. You move forward, you gain ground, and it's tangible progress. 
Repeat that a thousand times and you've gone somewhere significant. But you didn't do it by looking at the thousand-inch journey and feeling the weight of it. You did it by taking one inch at a time. When you think "I need to run a marathon," your brain shuts down. When you think "I need to run to the next mailbox," your body moves. That's the difference. 02 The trap of yard-by-yard thinking The problem is that our default thinking mode is yards, not inches. We see the big goal. We calculate the total distance. We immediately feel the weight of it. And then we either: Procrastinate, because it feels too big to start Half-step, because giving it all for a thousand inches sounds exhausting Quit, because we misjudge how hard it is People who think in yards burn out. They're so focused on the distance that they forget to notice the progress they're actually making. Everything feels inadequate. 03 Applied everywhere Writing a book: don't think about 80,000 words. Think about 500 words today. Do that every day and you've written a chapter within the month. Building a business: don't think about profitability in year 3. Think about closing one sale this week. Repeat 50 times and you have a pattern. Changing habits: don't think about being a "new person." Think about making one better choice today. Make it tomorrow. Make it next week. Six months later, you're unrecognizable. The inch is always doable. The yard is why people quit. 04 Why small matters There's also something psychological about small progress. It's real. It's visible. You can feel it. When you take an inch and then another, you build momentum. You build belief that this is possible. You stop asking "Can I do this?" because you're already doing it. Yards are abstract. Progress is inches. 05 Your move If you're stuck on something, stop thinking yards. Stop calculating the total distance. Stop waiting until you feel ready for the whole thing. Just commit to an inch. Today. This week. 
A thousand inches gets you a lot farther than a thousand yards of planning. --- # Pain Is Weakness Leaving the Body URL: https://niranta.blog/quotes/pain-is-weakness-leaving-the-body/ Category: Quotes Published: October 31, 2025 Read time: 7 min Words: 877 It's a phrase Marines say. Soldiers repeat it. Tough people live by it. And it's mostly misunderstood. People think it means pain is good. That you should ignore it. That suffering is the path to strength. That's not it at all. 01 What the phrase actually means The real meaning is about adaptation. When you push your body, in training, in work, in life, you hit its limits. Your muscles burn. Your mind screams. Your system rebels. That pain is information. It's your body saying: you've hit the edge of what you can currently do. And then something remarkable happens: you adapt. Your body gets stronger. Your mind gets tougher. Your capacity expands. So when people say "pain is weakness leaving the body," they don't mean you should seek pain. They mean: when you encounter pain while pushing yourself to grow, don't run from it. The pain is temporary. The weakness leaving is permanent. 02 The distinction that matters There's good pain and bad pain. And the difference isn't just physical. Good pain: The burn in your muscles when you're lifting. The mental exhaustion after deep work. The discomfort of learning something new. The tension of a difficult conversation you needed to have. This is the pain of growth. It's temporary. And it makes you stronger. Bad pain: Sharp, stabbing, warning pain. The kind that says something is actually injured. Chronic pain that doesn't resolve. The pain of staying in situations that harm you. The psychological damage of environments that break you down. This pain isn't weakness leaving. This pain is damage. And you need to stop, rest, or get help. 03 The mental component Here's where people really get it wrong. They think toughness means pushing through everything. 
Real toughness is more nuanced. It's the ability to: Distinguish: Can you tell the difference between growing pain and actual damage? Persist: Can you keep going when it's hard but not harmful? Rest: Can you stop when you need to, without shame? Recover: Can you learn from the experience and come back stronger? The people who end up most broken aren't the ones who pushed hard. They're the ones who couldn't tell the difference, who pushed into actual damage and kept going. 04 Applied to life beyond the gym This isn't just about physical training. It's about work. Learning. Relationships. Everything. When you take on a hard project, you hit mental and emotional limits. That discomfort is weakness leaving. Push through it and you become capable of things you couldn't do before. When you have a difficult conversation, when you set a boundary, when you say something you're afraid to say, that's discomfort. That's weakness leaving. But when you're in a relationship that damages you, a job that depletes you, a situation that breaks you down, that's different. That's not weakness leaving. That's damage accumulating. The strength isn't in staying. The strength is in recognizing the difference and having the courage to leave. 05 The compound effect Here's what gets interesting over years: the people who consistently push through the good pain, who rest from the bad pain, who keep adapting, they become genuinely strong. Not in spite of their pain. But because of how they responded to it. They didn't seek out suffering. They just didn't flinch when growth required temporary discomfort. And they weren't stubborn enough to stay in situations that were actually harming them. That balance, between persistence and wisdom, between toughness and knowing when to quit, is what real strength looks like. 06 So what now If you're in pain right now, the question isn't: "Should I push through this?" The question is: "Is this the pain of growth, or the pain of damage?" 
If it's growth pain, don't run. Push through. The weakness is leaving. If it's damage, stop. Rest. Recover. The strength is in knowing the difference. --- # All Fart, No Shit URL: https://niranta.blog/quotes/all-fart-no-shit/ Category: Quotes Published: October 24, 2025 Read time: 5 min Words: 674 I hear this phrase sometimes from people who grew up in tough environments: "All fart, no shit." It's crude. It's also spot-on. It describes the person who makes a lot of noise, who talks big, who seems important, who takes up space, but who never actually delivers. No substance. All show. The person who farts is all air. The person with shit has weight. 01 We live in an age of air Social media, podcasts, newsletters, conferences, we're drowning in noise. In commentary. In hot takes. In people who have opinions about everything. Most of it is fart. Sound without substance. Provocation without point. Noise designed to capture attention, not to create value. A few people are different. They have weight. They have shit. 02 The test is simple Can you point to what they built? What they created? What they changed? Or are they just... here? Taking up space in your feed? Commenting on things? Offering unsolicited opinions? The fart people are easy to spot once you know what to look for. They're the ones who: Post a lot but never finish anything Have ideas but never test them Talk about their success more than they demonstrate it Are always in the middle of something big that we'll hear about eventually Spend more time arguing online than creating offline The shit people are different. They: Create in public or not at all Let their work speak Don't need to tell you how good they are Show up to finish things, not just start them Spend their time building, not broadcasting 03 This applies to you too The question isn't "Am I calling out the fart people?" The question is: "Am I being a fart person?" It's easy to be all air. To have a hot take. To share an opinion. To say what you're going to do. 
We all have some fart in us. Some noise we produce without substance behind it. The goal isn't to be perfect, it's to have less of it and more of the real stuff. 04 The compound effect Here's the thing: fart people stay the same. They're always talking about their big plans. Always about to launch. Always working on something amazing. Shit people change the landscape. Their work compounds. People start to point to them. They become known not for what they say, but for what they've made. After 10 years of fart, you're still at the starting line. Still making promises. Still "about to." After 10 years of shit, you're the person others point to when they want to know how it's done. 05 Choose your work You don't have to be loud to be valuable. You don't have to be everywhere to matter. You don't have to have the biggest platform to create the biggest impact. You just have to do the thing. And finish it. And do it again. Everything else is just noise. --- # Control Is a Myth URL: https://niranta.blog/quotes/control-is-a-myth/ Category: Quotes Published: October 24, 2025 Read time: 6 min Words: 770 We spend our lives chasing an illusion: the idea that we can control our outcomes. We optimize our workflows. We plan our days to the minute. We stack systems on top of systems, trying to design our way into certainty. And then life happens. The client cancels. The partner leaves. The opportunity passes. The health diagnosis changes everything. And we realize: we were never in control. We were just pretending. 01 The difference between process and outcome Here's what's real: you can control your effort. You can control your focus. You can control your response. What you can't control is the result. This isn't pessimism. It's clarity. You can run the best sales process and lose the deal to a competitor with deep pockets. You can write the best post and have it disappear into the algorithm. You can be the best parent and have your kid make choices you wouldn't choose. 
The process was perfect. The outcome was not yours to guarantee. 02 The prison of perfect planning People who chase control end up paralyzed. They can't move until they've planned for every contingency. They can't start until they're certain of success. But certainty is a lie. It doesn't exist. You can't plan your way into it. The person who does decent work and sends it out learns twice as fast as the person waiting for perfect conditions. The entrepreneur who launches with 70% of what they planned succeeds faster than the one waiting for 100%. Because they're iterating. They're learning. They're adapting based on actual feedback, not imagined scenarios. 03 What you actually control Your effort: You can show up. You can work. You can give your best on the day, with the energy you have. Your attention: You can choose where you focus. What you study. What you practice. What you ignore. Your response: You can't control what happens, but you can control what you do next. The setback came. Now what's your move? Your boundaries: You can choose what you say yes to and what you refuse. Who you spend time with. What commitments you make. Your systems: You can set up processes. Habits. Routines. Things that run on their own and make good outcomes more likely. These are the real levers. Not the fantasy of controlling outcomes. 04 The peace on the other side Here's what's strange: once you accept that you can't control the outcome, you become more effective. Because if you're not responsible for the outcome, you're free to go all-in on the process. You work harder. You focus better. You stay present. Because you're not constantly checking the score. You're just doing the thing you came to do. And paradoxically, that's how you get better outcomes. Not by obsessing over them, but by obsessing over the work that might produce them. 05 Living in reality The world doesn't care about your plans. It doesn't respect your systems. It doesn't reward your hoping. 
What it does do is respond to your effort. To your adaptation. To your willingness to adjust when reality doesn't match your expectations. The person who can accept that they're not in control, and then do their best work anyway, that's the person who actually moves forward. Everyone else is busy fighting reality, trying to force it into their plan. You'll get there faster if you just let go and do the work. --- # Not My Circus, Not My Monkey URL: https://niranta.blog/quotes/not-my-circus-not-my-monkey/ Category: Quotes Published: October 24, 2025 Read time: 5 min Words: 731 There's a Polish proverb that carries more wisdom than most business books: "Not my circus, not my monkey." In a world obsessed with having opinions about everything, with taking stands on every issue, with being seen as someone who cares about all things, this simple phrase is revolutionary. It's not about apathy. It's about boundaries. 01 The Cost of Everyone's Circus We live in an age where outrage is currency. Where commenting on every situation, every decision, every drama unfolding in our periphery is expected. The algorithm rewards us for it. Our networks celebrate us for it. We feel virtuous for it. But something is stolen from us in the process: our peace. The monkeys multiply. The circuses expand. Your attention fragments further. And at the end of it, you're responsible for problems you never chose, advice nobody asked for, and outcomes you can't control. 02 What Belongs to You Your circus is the work that belongs to you, the problem you've been asked to solve, or the responsibility that came to you by design, not accident. Everything else is theater. This doesn't mean indifference. It means discernment. It means asking: "Is this actually mine to carry?" before you pick it up. Because every circus you adopt, every monkey you take on, is time you're not spending on what actually matters. It's attention you're not giving to the people who depend on you. 
It's energy you're not investing in becoming better at what you're meant to do. 03 The Quiet Power of Staying in Your Lane The most effective people I know share something in common: they've decided what circus is theirs. And they've gotten ruthless about everything else. They don't comment on every crisis. They don't take stands on every issue. They don't feel obligated to have an opinion about every situation. This makes them seem, to the untrained eye, indifferent. Uninformed. Detached. They're actually the most focused people in the room. 04 When to Care This is where nuance matters. You're not building a fortress of indifference. You do care. You do help. You do show up. But you do it intentionally. When someone in your actual life asks. When it's adjacent to what you're building. When you have capacity that won't dilute what you've committed to. The filter is simple: Is this mine to own? If the answer is yes, it's your work, your responsibility, your domain, then you bring your full attention. You don't half-step. You don't resent it. If the answer is no, you let it go. Without guilt. Without explanation. 05 The Freedom on the Other Side Once you've drawn this line, something unexpected happens. You feel lighter. More focused. More capable. The monkeys that are actually yours get better attention. The circus that belongs to you runs better. You have space to think, to create, to improve. And paradoxically, you become more useful to others, not because you're dabbling in everything, but because you're excellent at the things you've chosen to own. So ask yourself: What circus is actually yours? Which monkeys did you really agree to manage? And which ones have you been carrying out of guilt, obligation, or the illusion that you should have an opinion about everything? It's time to put some of them down. 
--- # Slow Is Smooth, Smooth Is Fast URL: https://niranta.blog/quotes/slow-is-smooth-smooth-is-fast/ Category: Quotes Published: October 24, 2025 Read time: 6 min Words: 721 There's a principle from military and tactical training that sounds backwards at first: "Slow is smooth, smooth is fast." It means that rushing creates mistakes. Mistakes require corrections. Corrections take time. So the person who moves deliberately, with precision, often arrives faster than the person who sprints and stumbles. Speed comes from not having to redo work. It comes from not having to backtrack. It comes from moving with intention, not panic. 01 The hidden cost of rushing When you rush, you miss things. You make assumptions. You cut corners. You tell yourself you'll fix it later. But later, you're already onto the next thing. The mistake compounds. It infects other work. The person who moves fast and sloppy spends their time managing chaos. They're constantly fixing things that break. They're constantly explaining why something doesn't work the way they promised. They're constantly behind, even though they're always moving. And everyone knows it. They know the work is sloppy. They know there's debt being built. They just don't want to admit it. 02 Smooth is about precision Smooth means you've thought through the steps. You know what comes next. You've eliminated the wasted motion. There's no jerking, no sudden changes, no corrections mid-flow. A surgeon moves smoothly. A pianist plays smoothly. They're not going slower than the person who's rushing, they're actually faster. But their speed comes from having already done the thinking. The execution is just execution. This is why preparation matters more than talent. Preparation is what gives you smoothness. It's what lets you move fast without looking panicked. 03 The meditation of precision Precision requires attention. When you're moving with intention, you can't be distracted. You can't be half-present. 
You have to be here, now, thinking about what you're doing. This is actually restful, even though it sounds like work. There's no mental noise. No second-guessing. No anxiety about whether you're doing it right. You're just doing it, fully. This is why people who work with discipline often seem calmer than people who work in chaos. It's not that they're naturally calm. It's that their work gives them practice in presence. 04 In a world of chaos, smooth is radical Everything around you is trying to make you rush. The culture wants fast. The metrics want fast. The social media feed wants fast. Everyone is always in a hurry and always falling behind. But the person who chooses to move smoothly, deliberately, with precision, that's radical. That's countercultural. That's someone who's not playing the game everyone else is playing. And because everyone else is chaotic and reactive, the smooth person stands out. Their work is cleaner. Their presence is calmer. People trust them more. They actually get more done. 05 Where to apply this This isn't about moving slowly in absolute terms. It's about the ratio of deliberation to action. It's about moving at a pace where you can maintain quality. In a negotiation, smooth is better than aggressive. In writing, smooth is better than prolific. In building a company, smooth is better than explosive growth. In relationships, smooth is better than drama. The question isn't "How fast can I go?" The question is "How fast can I go while maintaining precision?" And the answer, almost always, is faster than you think. --- # On Silence, Grief, and Finding Your Voice Again URL: https://niranta.blog/personal/hello-world/ Category: Personal Published: October 23, 2025 Read time: 8 min Words: 966 There are moments in life when words fail us. Not because we lack the vocabulary, but because the weight of what we carry makes even the simplest sentence feel impossible. There are moments in life when words fail us. 
Not because we lack the vocabulary or the ability to articulate, but because the weight of what we carry makes even the simplest sentence feel impossible. For me, that moment came in January 2025, when my brother passed away. I stopped writing. Not intentionally, not as a conscious decision, but because the part of me that had always found clarity in words suddenly found only silence. The cursor blinked on empty pages. Ideas that once flowed freely now felt trivial, disconnected from the profound shift happening inside me. How could I write about strategy, systems, and solutions, or about the farmhouse I am building, when I was struggling to make sense of the most fundamental questions about life, loss, and meaning? For months, I told myself I would write "when I was ready." But grief doesn't work that way. There is no finish line, no moment when you wake up and feel whole again. Recovery isn't a destination, it's a slow, uneven process of learning to carry what you've lost while still moving forward. And it's hard. To this day I sense him around me every hour and minute of the day. I haven't fully recovered. Not by miles. I'm not sure anyone ever does, and I doubt I will. But I've realized that waiting for complete healing before returning to the work I love means waiting forever. So here I am, writing again. 01 what grief teaches you about clarity In the immediate aftermath of loss, everything feels both urgent and meaningless. The emails that once seemed important now feel absurd. The phone calls, the WhatsApp messages, the meetings, the deadlines, the metrics, they all fade into background noise. What remains is startlingly simple: the people you love, the work that gives you purpose, the problems worth solving. This clarity is brutal. It doesn't arrive gently or gradually. It arrives all at once, like a spotlight in a dark room, illuminating everything you've been pretending not to see. And once you've seen it, you can't unsee it. 
I've spent nine months struggling in a dark room trying to understand what it means for me, and why I do anything in the first place. 02 the silence that followed In the weeks after my brother died, people would ask how I was doing. I would say "fine" or "okay" or "taking it day by day." These were not lies, exactly, but they weren't true either. They were the words you say when the real answer is too complicated, too raw, too much for casual conversation. What I couldn't do was write. Writing, for me, has always been how I think. It slows me down. I don't write to communicate ideas I already have, I write to discover what I think in the first place. The act of putting words on paper forces clarity by slowing my thinking. It exposes gaps in logic, reveals assumptions I didn't know I was making, and transforms vague intuitions into concrete arguments. 03 the weight of triviality One of the strange things about grief is how it makes everything else feel trivial. Before my brother died, I cared deeply about my work. During his year-long fight and after he died, it all felt absurd. Who cares about anything when people are dying? But here's what I've learned: the feeling that everything is trivial is itself a kind of distortion. It's grief's way of protecting you, of narrowing your focus to only what's essential. My brother worked in technology, building systems that helped people do their jobs better. He thought of it as craft, the work of understanding a problem deeply enough to build something elegant that solves it. And he was right. The work isn't trivial. What's trivial is the noise around the work. Grief strips that away. What remains is the question: are you solving problems that matter? 04 the quiet comeback I didn't choose to write again, it just crept back in. Slowly, almost invisibly. It began with notes. Not essays. Fragments. Things I noticed while reading, questions I couldn't answer, odd connections between ideas. 
The first drafts were awful, stiff, self-conscious, trying to sound like an old version of me. Eventually I stopped chasing a voice and just said what I actually thought. Less pretense, more honesty. 05 what comes next I don't know what comes next. I don't have a plan or a roadmap or a clear vision of where this is going. I'm writing again because I need to make sense of some of the questions I have on my mind. And because I think the answers matter. I'm not fully healed. I'm not sure I ever will be. But I'm here, ain't I, and I'm writing, and that feels like enough for me now. ---