// hot take — resumes are redundant. there are so few skills left that will actually give you an edge over anthropic or openai’s latest model. everybody is quickly gaining access to every skill. the only things that matter now are drive, vision and execution.
I like building things where the playbook doesn’t exist yet. That’s why today’s landscape excites me more than anything — I think we are in a transitional period where early adopters have a once-in-a-generation window to build something from nothing.
I’ve spent over 1000 hours in Claude Code — building apps I always wished existed, and helping others do the same.
I always view everything as a trader — perhaps a reflection of some of my past ventures — and to me the potential return on your own time has never been higher.
I was bored one weekend, scrolling Twitter and seeing countless websites comparing execution costs across major perp DEXs. I thought to myself, “this can’t be hard to build” — so I opened v0 and built it in a few prompts. Then I thought: why does every one of these comparisons assume a 100% allocation to a single DEX? What if we split it across exchanges to reduce the price impact of a large order? Then it hit me — I’d taken an optimisation course a while back, and this was the exact same problem. try it out ↗
It’s a shame I never got around to marketing this tool. Realistically, though, it has its limitations. The main one is that it only takes static snapshots of order books at a single point in time — in practice you’d need to stream live book data and continuously re-optimise as liquidity shifts. Still, I thought it was pretty cool. Here’s how it works:
TL;DR
Fragmented frames order routing as a convex optimisation problem, assuming marginal slippage increases linearly with size on each DEX (so total slippage cost is quadratic). The smooth convex approximation is solved in closed form via Lagrange multipliers, enabling real-time computation rather than expensive brute-force methods.
Anyone who’s tried to move real size on a perps DEX knows the feeling — you send one big order, it chews through the book, and suddenly you’re paying way more than the price you saw on screen. The bigger the order, the worse it gets.
So the question became: what if instead of dumping everything on one exchange, you split it across a few? Each venue has different fees and different depth. If you assume slippage scales roughly linearly with size on each one, the whole thing turns into a textbook optimisation problem.
The cost of executing volume $x_i$ on exchange $i$ has two components — fees and slippage:

$$C_i(x_i) = f_i x_i + s_i x_i^2$$

where $f_i$ is the fee rate and $s_i$ is the slippage coefficient.
The first term is the fee. The second captures slippage — it grows quadratically with size because each marginal unit costs more.
Minimise total execution cost across all exchanges:

$$\min_{x_1,\dots,x_n} \; \sum_i \left( f_i x_i + s_i x_i^2 \right)$$

Subject to:

$$\sum_i x_i = X, \qquad x_i \ge 0$$
Because the cost function is convex and quadratic, we can solve this exactly using Lagrange multipliers — no iterative solvers, no gradient descent.
The multiplier $\lambda$ equalises the marginal cost across all active exchanges, and the budget constraint pins it down:

$$\lambda = \frac{X + \sum_i \frac{f_i}{2 s_i}}{\sum_i \frac{1}{2 s_i}}$$
Once we have $\lambda$, the optimal volume for each exchange follows in closed form:

$$x_i^* = \frac{\lambda - f_i}{2 s_i}$$
If a venue’s fee is so high that $\lambda - f_i$ goes negative, that venue gets zero allocation, and $\lambda$ is recomputed over the remaining venues.
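The closed-form recipe above can be sketched in a few lines of Python. The fee rates and slippage coefficients in the usage line are illustrative placeholders, not real venue parameters:

```python
def optimal_split(X, fees, slopes):
    """Minimise sum(f_i*x_i + s_i*x_i^2) subject to sum(x_i) = X, x_i >= 0."""
    active = list(range(len(fees)))
    while True:
        # lambda = (X + sum(f_i / 2s_i)) / sum(1 / 2s_i) over the active set
        num = X + sum(fees[i] / (2 * slopes[i]) for i in active)
        den = sum(1 / (2 * slopes[i]) for i in active)
        lam = num / den
        # A venue whose fee exceeds lambda would get a negative allocation:
        # drop it and re-solve over the remaining venues
        drop = [i for i in active if lam - fees[i] <= 0]
        if not drop:
            break
        active = [i for i in active if i not in drop]
    x = [0.0] * len(fees)
    for i in active:
        x[i] = (lam - fees[i]) / (2 * slopes[i])
    return x

# Two venues with equal depth but different fee rates
split = optimal_split(1_000_000, fees=[0.0002, 0.0005], slopes=[1e-9, 1e-9])
```

With equal depth, the cheaper venue simply absorbs more of the order, and the marginal cost $f_i + 2 s_i x_i$ ends up equal across both active venues.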
The model assumes slippage is a smooth quadratic curve. Real order books are discrete step functions — there might be a massive liquidity wall that the smooth model can’t capture.
To handle this, Fragmented runs a tournament between two strategies:
The Challenger — the Lagrange-optimised split allocation.
The Champion — 100% allocation to whichever single exchange is cheapest for the full size.
Both are simulated against actual discrete order book data. Whichever costs less wins. This guarantees Fragmented never performs worse than a standard, non-fragmented trade.
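The tournament itself is easy to sketch, assuming each venue's order book is a list of (price, size) levels. The books, fees, and helper names here are hypothetical, not Fragmented's actual code:

```python
def walk_book(levels, size):
    """Cost of market-buying `size` units against discrete (price, size) levels."""
    cost, remaining = 0.0, size
    for price, available in levels:
        fill = min(remaining, available)
        cost += fill * price
        remaining -= fill
        if remaining <= 0:
            return cost
    raise ValueError("not enough liquidity on this venue")

def tournament(size, books, fees, split):
    """Challenger (the optimised split) vs champion (best single venue)."""
    challenger = sum(walk_book(books[i], split[i]) * (1 + fees[i])
                     for i in range(len(books)) if split[i] > 0)
    champion = min(walk_book(book, size) * (1 + fee)
                   for book, fee in zip(books, fees))
    return min(challenger, champion)
```

Because the final `min` always includes the best single-venue route, the split can never do worse than a standard unfragmented trade, which is exactly the guarantee described above.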
All of the above runs each time calculateFragmentedTrade() is called.
A $1,000,000 trade across two exchanges:
| Strategy | Allocation | Cost |
|---|---|---|
| Split (Lagrange) | $500k Hyperliquid / $500k Lighter | 15 bps |
| Single route | $1M on Hyperliquid | 12 bps |
In this case the single route wins because Hyperliquid had a deep liquidity wall that the smooth model couldn’t see. The tournament catches it, and the algorithm routes 100% to Hyperliquid at 12 bps.
This is exactly why the tournament matters. The optimisation is good, but the safety net is what makes it reliable.
Equity perps have found product-market fit shockingly fast. Since HIP-3 launched only a month and a half ago, RWA pairs on Hyperliquid have accumulated over $3.5 billion in volume, with many pairs averaging $50m+ daily.
But how do you actually price an equity as a perpetual future? The oracle problem is non-trivial: you need a reliable price feed during market hours, a sensible mechanism for after-hours, and a mark price that resists manipulation.
Perpetual futures are synthetic contracts that track an underlying “oracle price” through a funding mechanism. If the mark price is above the oracle, longs pay shorts. If below, shorts pay longs.
This funding mechanism keeps the perp price tethered to the underlying. For crypto perps, the oracle is typically sourced from a spot price index across major exchanges.
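As a toy illustration of the direction of the payment (the proportionality constant `k` and the absence of clamps or funding intervals are simplifications, not any venue's actual funding spec):

```python
def funding_payment(long_notional, mark, oracle, k=1.0):
    """Positive result: longs pay shorts. Negative: shorts pay longs.
    k is a made-up scaling constant standing in for the funding-rate formula."""
    premium = (mark - oracle) / oracle
    return long_notional * k * premium

# Mark 1% above the oracle: a $10k long pays ~$100 of funding to the shorts
payment = funding_payment(10_000, 101.0, 100.0)
```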
Hyperliquid’s HIP-3 allows builders to deploy markets permissionlessly. External teams can create markets with custom oracle and mark pricing, provided they stake 500k $HYPE (~$20 million). There are currently 3 active HIP-3 deployers.
For index products, oracle pricing uses a cost-of-carry model:
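In its standard form (the deployer's exact parameterisation may differ), cost-of-carry prices the future off the spot price $S$, the risk-free rate $r$, the dividend yield $q$, and the time to expiry $\tau$:

$$F = S \, e^{(r - q)\tau}$$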
After-hours is where things get interesting. There’s no spot market, so the oracle derives price from the perp market itself via an Impact Price Difference (IPD):
The oracle uses an EMA formulation with decay $\tau = 1$ hour:
Decay factor:
With clamping to prevent runaway updates:
With $c = 0.1$, the max single-update adjustment is ~9.5%.
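A generic clamped-EMA update illustrates the mechanics. The decay factor $1 - e^{-\Delta t/\tau}$ and the multiplicative clamp are standard constructions used here as assumptions; Hyperliquid's exact HIP-3 formulas may differ:

```python
import math

def ema_update(oracle, ipd, dt_seconds, tau=3600.0, c=0.1):
    """Move the oracle toward the perp-implied price, clamping the
    per-update log-return to [-c, +c]."""
    alpha = 1.0 - math.exp(-dt_seconds / tau)  # decay factor
    step = max(-c, min(c, alpha * ipd))        # clamped update
    return oracle * math.exp(step)
```

Under this construction the largest possible single-update drop is $1 - e^{-0.1} \approx 9.5\%$, which is where a figure like the one above can come from.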
The mark price is the median of three components:
Updates clamped to ±50 bps.
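A median-of-three with a per-update clamp can be sketched as follows; the three component inputs are placeholders, not Hyperliquid's actual mark-price inputs:

```python
import statistics

def next_mark(prev_mark, a, b, c, cap_bps=50):
    """Median of three candidate prices, with the move from the previous
    mark clamped to +/- cap_bps basis points."""
    candidate = statistics.median([a, b, c])
    cap = prev_mark * cap_bps / 10_000
    return max(prev_mark - cap, min(prev_mark + cap, candidate))
```

The median discards a single bad component, and the clamp bounds how far any one update can move the mark even if two components misbehave at once.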
Ventuals focuses on private market speculation — OpenAI, Anthropic, SpaceX. Companies that don’t trade on public exchanges.
Notice provides 24/7 pricing by aggregating transaction data from brokers and investors. The oracle blends this with the perp’s mark price:
Notice data is pulled once per minute — considerably less frequent than a traditional perp oracle. This balances real-time market activity with a smoothed historical view, reducing the impact of isolated trades or temporary order-book imbalances.
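One simple way to realise such a blend is a weighted average between the latest Notice print and an EMA of the perp's own mark. The weight $w$ and the EMA input are illustrative assumptions, not Ventuals' published parameters:

```python
def blended_oracle(notice_px, mark_ema, w=0.5):
    """Weight w on the (slow, once-per-minute) Notice price,
    the remainder on the perp's own smoothed mark price."""
    return w * notice_px + (1 - w) * mark_ema
```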
Equity perpetuals represent a critical bridge between traditional finance and onchain markets. The methodology is still maturing — expect continued refinements in oracle design, mark price stability, and risk management as volume grows and more builders enter through HIP-3.
Two years ago, @sama said we’d see the first one-person billion-dollar company. A year later, @DarioAmodei gave it a 70–80% chance of happening by 2026. Today, we live in a surplus of VCs wanting to hand out checks — yet the need to take those checks is lower than ever.
The pattern isn’t emerging anymore. It’s here.
The entire startup model was built around a core constraint: building software is expensive and slow. Every part of startup culture — fundraising, hiring, equity splits, accelerators — exists because writing code used to require a lot of people and a lot of time. That constraint is gone. Completely.
With nothing but a Claude Max plan and some caffeine, you can build a production-ready app in a weekend. Sure, building the next Facebook might still require top talent overseeing it all. But in three months’ time, it may well not. Escape velocity has been reached.
I’ve built full applications — backend, frontend, auth, payments, deployment — without writing a single line of code. The gap between “idea” and “shipped product” has collapsed from months to hours.
Distribution is free, yet it’s the hardest part. If you’re going to spend money, spend it here. Hire someone who knows how to go viral, knows how to speak to users, and, most importantly, will blow your product up.
I view everything as a trader. Choosing how you spend your time is exactly like choosing how to allocate cash in a portfolio — you want to maximise your reward to risk ratio. Right now, the expected return on being a solo founder is absurdly high. The market hasn’t priced it in yet.
Or, take the gambler’s view. Every startup you launch is a lottery ticket that costs next to nothing. So why wouldn’t you be rushing to collect as many near-free tickets as possible?
In a year’s time, everyone will be launching their own startup. The window of having a first mover advantage is already closing. So don’t miss it.
OpenClaw is the closest thing to having a personal JARVIS that actually exists right now. 160k+ GitHub stars, 5700+ community skills, integrations with everything. It’s genuinely impressive. There’s just one problem: if you don’t configure it properly, it will absolutely destroy your API bill.
TL;DR
The default OpenClaw setup fires expensive models on a loop, loads your entire context every heartbeat, and keeps every skill active. Fix the heartbeat interval, route cheap tasks to cheap models, trim your skills, and manage your context window — you’ll cut costs by 60–80%.
Every heartbeat is a full API call with entire session context. A 5-minute heartbeat on Sonnet = 288 calls/day = ~$43/day just for status checks. Set it to 55 minutes (just under cache TTL), add silent hours overnight, and use event-driven triggers for channels that need fast response.
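The arithmetic is easy to sanity-check. The per-call cost below is an assumption; the real figure depends on your context size and model pricing:

```python
def daily_heartbeat_cost(interval_min, cost_per_call=0.15):
    """Calls per day and rough daily spend for a fixed heartbeat interval."""
    calls_per_day = 24 * 60 // interval_min
    return calls_per_day, calls_per_day * cost_per_call

# 5-minute heartbeat: 288 calls/day; 55-minute heartbeat: 26 calls/day
```

At ~$0.15 a call, 288 daily heartbeats is roughly $43/day; stretching the interval to 55 minutes drops that by more than 10x before you even touch model routing.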
A heartbeat that replies HEARTBEAT_OK doesn’t need Sonnet. Route cheap tasks to Haiku or Gemini Flash. Use Sonnet/Opus only when the task demands reasoning. After switching, most people see 60–70% cost reduction with no UX difference.
Every active skill inflates your system prompt. More skills = more input tokens on every call. If you haven’t used a skill in a week, disable it. Move static instructions from personality files into skills so they only load when invoked.
Context accumulation is the silent killer. Reset sessions after each task. Cap at 100k tokens. Use memory search over full-file injection. Consolidate cron jobs that can run together.
| Metric | Default | Optimised |
|---|---|---|
| Heartbeat interval | 5 min | 55 min |
| Daily API calls | ~300 | ~25 |
| Monthly cost | $800–1500 | $15–40 |
Spend 20 minutes on your config. Save hundreds a month. That’s the best ROI you’ll get all year.
I’ve spent over 1000 hours inside Claude Code. Not dabbling — shipping. Real products, real users, real deadlines. Vera, Roam, client projects through Vantage, random tools I built on weekends because I was curious. At some point it stopped feeling like a tool and started feeling like a second brain that types faster than I do.
This isn’t a “10 tips for Claude Code” post. It’s what I’d tell someone who’s about to go deep with it — the stuff that took me hundreds of hours to figure out, not the stuff you find in the docs.
TL;DR
Claude Code lets a solo developer build products that used to require teams of engineers, millions in funding, and years of runway. But only if you learn to work with it correctly. The leverage is real — the skill is knowing what to delegate, what to verify, and when to take over.
I’m building products right now — functional, polished, deployed products — that five years ago would have required a seed round, three engineers, a designer, and eighteen months of runway. I’m doing it in a day or two. Sometimes a weekend. Sometimes an afternoon.
That’s not hyperbole. Vera and Roam are real apps with real users. The kind of apps that used to cost millions to build. The entire backend, frontend, infrastructure, deployment — I built it with Claude Code sitting next to me in the terminal. The economics of software just fundamentally changed, and most people haven’t caught up yet.
The bottleneck isn’t code anymore. It’s taste, decisions, and knowing what to build.
It hallucinates confidence. Claude will write code that looks perfect, reads well, uses reasonable variable names — and is subtly wrong. Not syntax-error wrong. Logic-error wrong. You have to verify everything that touches business logic or data integrity.
It loses the plot on long sessions. Around the 30-40 message mark, context starts degrading. The fix: /clear and start fresh with a tighter prompt. A clean session with a good prompt beats a long session with accumulated drift every single time.
It over-engineers. Ask Claude to add a feature and it’ll add the feature, plus error handling for scenarios that can’t happen, plus an abstraction layer you don’t need. Be explicit: “Do exactly this, nothing more.”
Architecture is still your job. Claude can implement any architecture you describe. But it can’t tell you which architecture is right for your problem.
I’ve built apps in a weekend that are functionally equivalent to products that raised millions in venture capital. Not toy demos — real products with auth, payments, databases, deployment pipelines, monitoring. The full stack. A few years ago this would have been a 6-person team working for a year.
The question isn’t “can AI replace developers?” — it’s “what happens when one developer with AI can do the work of ten?” The answer: the barrier to building drops to near zero, and the only things that matter are taste, speed, and distribution.
Claude Code isn’t magic. It’s leverage. The output quality scales directly with your ability to think clearly about software, decompose problems, and verify solutions. If you don’t know what good code looks like, you can’t tell when Claude gives you bad code.
But if you do know what you’re doing — the leverage is genuinely unprecedented. You build more. You ship faster. You think bigger. That’s the real unlock.