therobots927 4 days ago

The “Better Offline” podcast has been warning that this was coming for months now. It’s incredible to me that AI evangelists have managed to focus the entire conversation on LLM “usefulness” without even mentioning costs. It allows them to play a game of smoke and mirrors where no one actually knows the true cost of an API call and there is no predictable cost model. Most AI companies can be safely assumed to be running at a loss, and if they can’t figure out how to make these models a lot more efficient, dreams of replacing software engineers will face the same reality as dreams of self-driving semi trucks. I'm looking forward to the AI winter myself… https://m.youtube.com/@BetterOfflinePod/videos

  • mns 4 days ago

    Reading the article and the linked GitHub post, as well as the original pricing announcement and the clarification post afterwards, this whole thing seems like some sort of Monty Python sketch. I can't believe that an actual enterprise-targeted product comes up with something like:

    > "AWS now defines two types of Kiro AI request. Spec requests are those started from tasks, while vibe requests are general chat responses. Executing a sub-task consumes at least one spec request plus a vibe request for "coordination"".

    I still don't understand why pricing can't be as simple as it was initially, presented in a clear and understandable way: tokens cost this much, you used this many tokens, and that is it. Probably because if people saw how much they actually consume on real tasks, they would realize that the "vibes" cost more than an actual developer.

    • ranie93 4 days ago

      Just give me dollar amounts; I feel like I'm paying these companies with vbucks at this point

      • sshine 4 days ago

        With open-weights models reaching a level where they're good enough for agentic coding, the price can be compared directly to the price of GPU rentals: https://vast.ai/ has an H100 at $1.65/hr., which can support ~40 concurrent sessions at 40 tok/s. Depending on your agentic workload, you can stretch that any way you like, but let's say it might support 10 active developers at a speed comparable to Claude Code with Sonnet 4 (which I've read is 90 tok/s) who aren't going crazy with sub-agents.

        Let's scale that up: $1.65/hr. is ~$1188/mo. (assuming you rent the GPU 24/7, which you could avoid, and which is probably fine as long as there's no scarcity), and divided by ten that's ~$119/mo. per user for 10 users.

        Add the service layer on top to make this a convenient software service, and I think $100/mo. is a bargain for unhindered (as far as it goes) access to a high-quality (as far as they come) agentic coding framework.

        Feel free to correct my napkin math; it was done very quickly.
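
        A quick sketch of that napkin math in Python (same guesses as above, nothing measured):

          # Napkin math: cost per developer of self-hosting an open-weights coding model
          gpu_cost_per_hour = 1.65    # vast.ai H100 rental price quoted above
          hours_per_month = 24 * 30   # renting the GPU 24/7
          developers = 10             # assumed number of active developers sharing it

          monthly_gpu_cost = gpu_cost_per_hour * hours_per_month  # ~$1188/mo.
          per_developer = monthly_gpu_cost / developers           # ~$119/mo. per user
          print(f"GPU: ${monthly_gpu_cost:.0f}/mo., per developer: ${per_developer:.0f}/mo.")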

    • kermatt 4 days ago

      Clear pricing makes it easy for you to control costs.

      Vibe pricing makes it easy for the vendor to maximize revenue.

      They have little incentive to make pricing transparent.

    • Bratmon 4 days ago

      If you care about how much money you're spending, you're not in AWS's target market.

  • kace91 4 days ago

    I think it’s clear that these tools are going to get more expensive.

    What is not clear to me is whether they'll get expensive enough to no longer be worth it for a company.

    A good engineer costs a lot of money and comes with limitations derived from being human.

    Let’s say AI manages to be a 2x multiplier in productivity. Prices for that multiplier can rise a lot before they reach 120k/year for 40 hours a week, the point at which you’re better off hiring someone.

    • ethbr1 4 days ago

      There are 3 functional futures for AI coding assistants, depending on what they provide, each of which supports order-of-magnitude different pricing:

      1. Vibe coder's wet dream = generates working technical product direct from product owner requirements = can be priced up to 99% of senior developer compensation (incl benefits / taxes)

      2. Productivity multiplier = fills in gaps, but requires technical architecture and planning knowledge = can be priced up to 99% of junior developer compensation (incl benefits / taxes)

      3. Macro on steroids = ignorantly expands code or performs basic tasks, but screws things up frequently enough, both obviously and subtly, to still benefit from even a junior dev driving it = can be priced up to 99% of team size reduction

      Technically, even if we're only in #3 (and I think there's a strong case to be made that we're there), these solutions are drastically underpriced relative to the human labor they reduce (e.g. axing the equivalent FTE hours it would have taken to write test coverage).

      * And don't forget the number they're replacing from the company perspective isn't {salary} but {salary+benefits+taxes}, which can be substantially greater

      • kace91 4 days ago

        Exactly. Pretty much our only hope is that these tools become cheap enough to provide that competition ends up driving the price down - ideally with a locally run open-source solution that gets you a significant part of the way there.

        And don’t forget that in your case one, they can pull the supermarket trick - keep prices low until the industry is destroyed, then price back up to a higher point than the crushed competition used to demand.

        • ethbr1 4 days ago

          I doubt repricing after market capture is going to work in this case. There's too much total addressable market at stake, and the end product (running code) is fungible.

          The only scenario where that would happen is if one company (e.g. OpenAI) were multiple generations ahead and incredibly profitable.

          That's not the reality we live in: even if the other models all froze themselves they'd be close enough to serve as a springboard for a competitor, and OpenAI isn't even profitable.

      • zaphirplane 4 days ago

        I don't agree with the numbers, but who cares; in five years they will all be true.

    • beefnugs 4 days ago

      Too bad companies love to smash that authoritarian hammer down as quickly as possible, and try to dictate everything based on baseless bullshit from other CEOs at the country club.

      Instead of just making sure their employees have access to the tools and generously encouraging them to explore their usefulness.

    • sshine 4 days ago

      > I think it’s clear that these tools are going to get more expensive.

      There are several meaningful interpretations of this:

        - The current models get cheaper to run at a fast pace
        - The next models outcompete the current models and retain cost
        - The VC-funded subscriptions we're accustomed to don't reflect actual costs
        - As we get more accustomed to AI, our usage may increase, increasing costs
    • surgical_fire 4 days ago

      > Let’s say AI manages to be a 2x multiplier

      That's extremely optimistic.

  • jauntywundrkind 4 days ago

    I love Better Offline, but I also very much see its faults. Ed's angle is sharp and pointy, which is lovely, but having to take it as entertainment rather than informed journalism is a bit hard. His anti-AI bias is rewarding, but it's not well informed; he's trying to go as hard as possible.

    Honestly I liked some of the non-AI content a lot more (but AI seems to be more of a focus lately). He also had such an amazing run of fantastic guests: it's nonsense to say this, but he's running through the list of awesome people to talk to fast, and I hope he's not afraid to invite many of these fantastic people back again!

    • luma 4 days ago

      Ed Zitron is a PR man who runs a PR company and who found widespread traction when he started posting AI shittakes on Twitter. He now claims to be a tech guy but has zero tech background. His entire schtick is saying smarmy things that get clicks, and with AI skepticism he's found a very wide audience.

      If he's right about something in this field, it's by accident and not a result of experience or research. He's a social media creation and one should consider his spicy takes in that light.

      • ath3nd 2 days ago

        Is Yahoo also a social media creation?

        'Cause they are now saying some stuff similar to what Ed has been saying. Well, fun times for LLM enthusiasts. Can't wait for the inevitable enshittification of Claude, Cursor, and all the others while they crank up the prices and put ads everywhere while failing to become profitable.

        https://finance.yahoo.com/news/tech-chip-stock-sell-off-cont...

      • ath3nd 4 days ago

        > If he's right about something in this field, it's by accident and not a result of experience or research.

        I could say the same thing about Elon Musk. /s

        In all honesty though, there has to be some counterbalance to the faceless mass of AI enthusiast fanboys.

        In Ed's articles he is rightfully pointing out that these companies are burning a lot of cash and are extremely unprofitable at this very moment. Seeing that OpenAI's recent announcements were an office suite, a "study" mode, and ChatGPT4.001, while also claiming that AGI is just around the corner, I think he smells bullshit, and I, for one, am happy that he is calling it out.

        Adam Neumann from WeWork was also a visionary genius, and look how it turned out. The current AI diehards might have the habit of outsourcing some critical thinking to LLMs, but even they should be able to spot the red flags: unprofitability with no clear profit window in sight, questionable launches, a rush to squeeze the user base, no moat, and plateauing performance. My money is on Ed on this one.

        • fragmede 4 days ago

          Charging people to access their system is the obvious route to profitability; the only question is over what time window. Uber shows that money can be patient. There will be losers and there will be winners, but discounting the entire sector seems illogical.

          • ath3nd 4 days ago

            > Charging people to access their system is the obvious route to profitability

            Ed argues, and I agree with him, that there is simply no route to profitability that makes sense numbers-wise.

            Considering this is the only sector propping up the otherwise very bleak US economy right now, if there are indeed losers, the consequences would far surpass the 2008 financial crisis. And it will spread. Closing our eyes to the numbers, especially when they are this bad, and relying on vibes and on the promises of leaders who have failed to deliver again and again (from literally every recent Musk promise, to Zuck's metaverse and superintelligence, to sama's recent ChatGPT releases and false promises of AGI), is not a plan.

            I know investors often invest based on vibes and hype (see SoftBank and WeWork), but not paying attention to the numbers and hoping the balance sheets magically resolve themselves has become so widespread that it starts to resemble a cult.

        • tptacek 4 days ago

          Not that I want to start this particular kudzu subthread but: lots and lots of people say exactly that about Elon Musk.

    • mvdtnz 4 days ago

      Ed Zitron is a professional cynic. You don't need to listen to anything he says because it's easy enough to predict. Whatever it is, he won't like it. Boring.

  • mvieira38 4 days ago

    GPT-5 was released just a few weeks ago and it's drastically cheaper than similar models. I think efficiency might not be that big of an issue long-term

    • theappsecguy 4 days ago

      They could simply be undercutting the market; there's no guarantee that it's actually that much cheaper for OpenAI to operate

      • grim_io 4 days ago

        They wouldn't be so eager to remove access to their other models if 5 were more costly to run.

        • therobots927 4 days ago

          You’re operating under the assumption that they can deploy multiple models at scale with no additional overhead. I wouldn’t be so sure of that…

          • grim_io 4 days ago

            Why did they keep hosting old models before GPT-5?

            • therobots927 3 days ago

              Maybe they spent so much money making GPT-5 that they can't afford to run older models anymore

  • motorest 4 days ago

    > It’s incredible to me that AI evangelists have managed to focus the entire conversation on LLM “usefulness” without even mentioning costs.

    LLMs are highly useful.

    AI-assisted services are in general costly.

    There are LLMs you can run locally on your own hardware. You pay AI-assisted services to use their computational resources.

  • moduspol 4 days ago

    I agree, though honestly at this point I'm a little more cynical. I'm open to seeing the best that anyone can get these things to do regardless of cost.

viccis 4 days ago

>He estimated that light coding will cost him around $550 per month and full time coding around $1,950 per month. As an open source developer who builds for the community, "this pricing is a kick in the shins," he said.

Well, maybe you shouldn't be using an enterprise AI coding toolset to do work that has historically been done for the love of coding and helping others. Paying for AI to do uncompensated labor is almost never going to work out financially. If it's really good enough to replace a decent engineer on a team, then those costs aren't "wallet-wrecking"; it just means he needs to stay away from commercial products with commercial pricing. It's like complaining about the cost of one of VMware's enterprise licenses for your home lab.

I will also add that it's rare these days to see a new AWS product or service that doesn't cost many times what it should. Most of these things are geared towards companies who are all in on the AWS platform and for whom the cost of any given service is a matter of whether it's worth paying an employee to do whatever AWS manages with that service.

KronisLV 4 days ago

Past a certain point, using tools like RooCode or anything else that lets you connect to arbitrary APIs feels like the way to go - whether you want to pay for one provider directly, or just use something like OpenRouter.

That way you have:

  * the ability to use regular VSC instead of a customized fork (that might die off)
  * the ability to pick what plugin suits your needs best (from a bunch available, many speak with both OpenAI and Ollama compatible APIs)
  * the ability to pick whatever models you want and can afford, local ones (or at least ones running on-prem) included
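
A rough sketch of why that works (the endpoint URLs and model name below are illustrative examples, not recommendations): most of these plugins speak the OpenAI-compatible chat API, so switching providers is mostly a matter of changing a base URL and a key.

  # The same client code works against OpenRouter, a local Ollama server,
  # or any other OpenAI-compatible endpoint.
  from openai import OpenAI

  client = OpenAI(
      base_url="https://openrouter.ai/api/v1",   # or e.g. "http://localhost:11434/v1" for a local Ollama server
      api_key="sk-or-...",                       # provider key, or a placeholder for local servers
  )

  resp = client.chat.completions.create(
      model="qwen/qwen-2.5-coder-32b-instruct",  # example model id, pick whatever you can afford
      messages=[{"role": "user", "content": "Refactor this function to be iterative."}],
  )
  print(resp.choices[0].message.content)
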
stackskipton 4 days ago

Is the pricing due to AWS massively overcharging because they know enterprises might pay it for ease of purchasing? Or is this realistic pricing for LLM coding assistance, and AWS has decided to be upfront about actual costs?

  • gjsman-1000 4 days ago

    Well, having personally used over $120 in Claude API credit on my $200/mo. Claude Code subscription... in a single day, without parallel tasks, yeah, it sounds like the actual price. (And keep in mind, Claude's API is still running on zero-margin, if not even subsidized, AWS prices for GPUs; combined with Anthropic still lighting money on fire and presumably losing money on the API pricing.)

    The future is not that AI takes over. It's when the accountants realize that, for a $120K-a-year developer, if it makes them even 20% more efficient (doubt that), you have a ceiling of $2,000/mo. on AI spend before you break even. Once the VC subsidies end, it could easily cost that much. When that happens... who cares if you use AI? Some developers might use it, others might not, and it doesn't matter anymore.
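
    The break-even arithmetic as a quick sketch (the 20% figure is from above; the benefits/taxes multiplier is my assumption):

      # Ceiling on monthly AI spend before you'd be better off just hiring
      salary = 120_000          # developer salary, $/year
      overhead = 1.3            # assumed multiplier for benefits + taxes
      productivity_gain = 0.20  # assumed efficiency gain from the tool

      ceiling_per_month = salary * overhead * productivity_gain / 12
      print(f"Break-even ceiling: ${ceiling_per_month:,.0f}/mo.")
      # ~$2,600/mo. with overhead; drop the multiplier and you get the $2,000/mo. figure above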

    This is also assuming Anthropic or OpenAI don't lose any of their ongoing lawsuits, and aren't forced to raise prices to cover settlement fees. For example, Anthropic is currently in the clear on the fair use "transformative" argument; but they are in hot water over the book piracy from LibGen (illegal regardless of use case). The worst case scenario in that lawsuit, although unlikely, is $150,000 per violation * 5 million books = $750B in damages.

    • sdesol 4 days ago

      > 120K a year developer, if it makes them even 20% more efficient (doubt that), you have a ceiling of $2000/mo

      I don't think businesses see it this way. They sort of want you to be 20% more efficient by being 20% better (with no added cost). I'm sure the math is: if their efficiency is increased by 20%, then that means we can reduce head count by 20% or not hire new developers.

      • hobs 4 days ago

        Oh it's much worse than that - they think that most developers don't do anything and that the core devs are just supported by the ancillary devs: 80% of the work is done by core devs and 20% otherwise.

        In many workplaces this is true. That means an "ideal" workspace is 20% of the size of its current setup, with AI doing all the work that the non-core devs used to do.

    • tsvetkov 4 days ago

      > Claude's API is still running on zero-margin, if not even subsidized, AWS prices for GPUs; combined with Anthropic still lighting money on fire and presumably losing money on the API pricing.

      Source? Dario claims API inference is already “fairly profitable”. They have been optimizing models and inference, while keeping prices fairly high.

      > dario recently told alex kantrowitz the quiet part out loud: "we make improvements all the time that make the models, like, 50% more efficient than they are before. we are just the beginning of optimizing inference... for every dollar the model makes, it costs a certain amount. that is actually already fairly profitable."

      https://ethanding.substack.com/p/openai-burns-the-boats

      • JCM9 4 days ago

        Most of these "we're profitable on inference" comments are glossing over the depreciation cost of developing the model, which is essentially a capital expense. Given the short lifespan of models, it seems unlikely that the fully loaded cost looks pretty. If you can sweat a model for 5 years then the financials would likely look decent. With new models every few months, it's likely really ugly.
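
        A toy illustration of why the lifespan matters so much (every number here is hypothetical, purely to show the shape of the amortization):

          # Amortize training capex over the model's useful life and compare to inference margin
          training_cost = 1_000_000_000            # hypothetical: $1B to train a frontier model
          inference_margin_per_month = 50_000_000  # hypothetical gross profit from serving it

          for lifespan_months in (6, 24, 60):
              amortization = training_cost / lifespan_months
              net = inference_margin_per_month - amortization
              print(f"{lifespan_months:>2} mo. lifespan: net ${net / 1e6:,.0f}M/month")
          # A 6-month lifespan wipes out the margin; a 5-year lifespan looks comfortable.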

        • AnimalMuppet 4 days ago

          Interesting. But it would depend on how much of model X is salvaged in creating model X+1.

          I suspect that the answer is almost all of the training data, and none of the weights (because the new model has a different architecture, rather than some new pieces bolted on to the existing architecture).

          So then the question becomes, what is the relative cost of the training data vs. actually training to derive the weights? I don't know the answer to that; can anyone give a definitive answer?

          • JCM9 4 days ago

            There are some transferable assets, but the challenge is the commoditization of everything, which means others have easy access to "good enough" assets to build upon. There's very little moat to build in this business, and that's making all the money dumped into it look a bit frothy and ready to implode.

            GPT-5 is a bellwether there. OpenAI had a huge head start and basically access to whatever money and resources they needed and after a ton of hype released a pile of underwhelming meh. With the pace of advances slowing rapidly the pressure will be on to make money from what’s there now (which is well short of what the hype had promised).

            In the language of Gartner’s hype curve, we’re about to rapidly fall into the “trough of disillusionment.”

  • JCM9 4 days ago

    Likely both. AWS was often 70+% more expensive on GPUs for a while and remains typically the most expensive provider when put in competitive scenarios. Remember that for Microsoft, Google, and others cloud is their low margin fun side business… for AWS/Amazon cloud is their high margin business that makes money.

    Whereas others are willing to lose a bit to get ahead in a competitive space like GenAI, AWS has been freaking out, since if cloud gets driven to a low-margin commodity then Amazon is in big trouble. Thus they'll do everything possible to keep the margins high, which translates to "realistic" pricing based on actual costs in this case. And thus, yes, they are seemingly hoping enterprises will buy some expensive second-rate product so they can say they have customers here, and hoping they don't notice that better, cheaper offerings are readily available.

    This is also a signal of what the "real cost" of these services will be once the VC subsidies dry up.

    • ipython 4 days ago

      > This is also a signal of what the "real cost" of these services will be once the VC subsidies dry up.

      Sooooo true. Waiting for AI's "Uber" moment, aka when Uber suddenly needed to actually turn a profit, and overnight drivers were paid less while prices rose 3x.

    • stackskipton 4 days ago

      Azure/Office 365 is Microsoft's money maker as well. That's why, if you compare their prices, they are pretty much in lockstep.

      GCP is the only one of the big three that is much cheaper, clearly a play to get customers since they are a distant third.

  • Insanity 4 days ago

    Yeah, honestly, the 'upstarts' in the space need to get users and burn VC money to do so, and part of that is selling below a reasonable market price.

    Amazon / AWS might not want (or need) to play that game.

    • verdverm 4 days ago

      What about MS and Copilot?

      • stackskipton 4 days ago

        It's possible they have decided to take more of the VC route of burning money to try to gain market share, in hopes they can lock everyone in and then jack up the prices.

        Obviously, they are not taking VC money but using revenue from other parts of the company.

shepardrtc 4 days ago

I started using it this weekend to build a greenfield project from scratch. Vibe requests ran out very quickly, so I just opened up VS Code in another window and had Copilot's Agent handle those. I just use Kiro for the Spec requests at the moment, which it does quite well. I think it's a great tool, but if someone can copy what they're doing into Copilot, then I'll go back to using Copilot exclusively.

The pricing really turned me off after a fantastic initial experience.

  • torginus 4 days ago

    Aren't they just using bog standard Anthropic models? Their 'secret sauce' is the prompts.

    • shepardrtc 4 days ago

      Yes. They have modified the VS Code interface to present the Design/Requirements/Tasks as first-class citizens, something that you refer back to often. I think it works well. Nothing that couldn't be replicated, of course. Which makes it surprising that they fumbled so hard with the pricing.

motbus3 4 days ago

"Vibe requests are useless because the vibe agent constantly nags me to switch to spec requests, claiming my chats are 'too complex'"

How can you trust a tool that refuses to do the work just so it can take more money from you?

ronwg 4 days ago

Been a user since day 1; the workflow is neat and is how AI dev should look, IMO. Clearly the pricing has killed the vibe, but it does hint at where the market is predictably headed for this to be a sustainable business for non-model companies. Until Amazon addresses the developers' feedback, I made these Claude Code commands to easily replicate the spec-driven development flow in CC: https://github.com/ronwg/spec-driven-dev

wiether 4 days ago

If it was a small player, everybody would laugh and forget about them.

But now we're talking about AWS... so aren't other players going to see this as an opportunity to start increasing their pricing and stop bleeding millions?

  • mpalmer 4 days ago

    only if users actually pay

JCM9 4 days ago

AWS is so far behind on GenAI they’re just flailing at this point.

Their infrastructure is a commodity at best and their networking performance woes continue. Bedrock was a great idea poorly executed. Performance there is also a big problem. They have the Anthropic models, which are good, but in our experience one is better off just using Anthropic directly. On the managed services front there's no direction or clear plan. They seem to have just slapped "Q" on a bunch of random stuff as part of some disjointed, panicked plan to have GenAI products. Each "Q" product is woefully behind other offerings in the market. Custom ML chips were again a good idea poorly executed, failing to fully recognize that chips without an ecosystem (CUDA) do not make a successful go-to-market strategy.

I remain a general fan of AWS for basic infrastructure, but they're so far behind on this GenAI stuff that one has to scratch one's head at how they messed this up so badly. They also don't have solid recognized leaders or talent in the space, and it shows. AWS is still generally doing well, but recent financial results have shown chinks in the armor. Without a rapid turnaround it's unlikely they'll be the number one cloud provider for much longer.

  • leptons 4 days ago

    I'm fine if AWS doesn't pursue AI that much; they should focus on infrastructure. It's a solid business as-is. AWS doesn't need "GenAI" to continue to do what they've been doing for a long time. Not doing "GenAI" the absolute best does not mean they won't be "number one cloud provider for much longer". Nobody is moving off AWS just because they don't have the world's best "GenAI", which are honestly all pretty mediocre.

    • JCM9 4 days ago

      I agree. Although, based on the last earnings call, where folks were poking at why they're behind, it seems to be a sensitive spot for AWS leadership. They're not used to lagging in a hot space, which seems to be driving the panicked, disjointed strategy at the moment.

      I too would like to see them just admit they’re behind, state it’s not a priority, and focus on what they do well which is the boring (but super important) basic infrastructure stuff.

      • leptons 4 days ago

        They don't even have to say they are "behind". I'd like to see them admit that it isn't their focus or why AWS exists, and to leave "GenAI" up to companies that want to pour billions of investor money down the drain as a loss-leader to try to crush the competition.

        Nobody in "GenAI" is actually making a profit due to the costs of running it. OpenAI is losing billions of dollars; not even they can see a path to profitability any time soon.

        AWS would be wise to steer clear of all of it, for now, as far as I'm concerned. I'd rather they not raise prices on everything else to try to pay for a GenAI offering that's marginally better or worse than everyone else's.

        • JCM9 4 days ago

          You can almost see the business school case study developing.

          “Rather than focusing on core competencies, leadership panicked and rushed a poorly guided focus on GenAI without the right leadership and technical talent. Shifting resources toward this price-competitive space eroded margins and took focus away from core areas where AWS was a technical leader. Several years of misguided strategy, poor leadership, and intense fiefdom building during the chaos spelled the beginning of the end of AWS’s once dominant position in cloud. While still a respectable business, AWS is now a low-margin commodity player that’s struggling to innovate and is the number 3 cloud provider by revenue.”

          • leptons 4 days ago

            I still don't see how having GenAI on the same platform as your database and APIs is somehow the "AWS killer". OpenAI isn't going to suddenly be hosting my APIs and databases. Nobody in GenAI is going to fill the position that AWS holds, except maybe Microsoft Azure, but even then nobody is moving large infrastructure from AWS to Azure just because Microsoft is friends with OpenAI (as of today). And most "cloud" projects don't require "AI", except to be buzz-wordy. "Cloud" and "AI" are two distinctly different businesses with vastly different use cases.

      • ethbr1 4 days ago

        I wonder how much of that is missing Bezos as an arbiter / dictator.

        I'm hard-pressed to come up with a scenario where AWS leads cloud AI without something like the infamous "no non-API internal calls" memo/mandate. And Amazon at this point seems to lack a leader centralized enough to dictate in that way (and have her or his orders followed).

        • coredog64 4 days ago

          It's less about Bezos and more about the principal/agent problem. There are entire groups within Amazon that exist to justify promotion scope for the manager. Then there are entire groups that support revenue-generating operations but have been reduced to KTLO because there's no scope for promotion.

          Promotion-driven development is an issue across MAGMA, but IMO Amazon has it the worst because of the twin drivers of extreme stack-ranking and the focus on equity appreciation in compensation.

          Being behind on AI has resulted in a field day for empire builders. Amazon needs to get their overall house in order first before trying to be a leader in AI.

  • jdsully 4 days ago

    The worst thing is they deny access to Bedrock by default. I had to wait two weeks to get access because their support took multiple days to respond. More than enough time for me to port my entire app to GCP out of frustration. GCP has sensible default quotas, so you can at least try it without a two-week wait. Plus access to Gemini.

  • slowdog 4 days ago

    It's less that they're flailing and more that it's become some sort of Lord of the Flies culture, with senior leaders directly competing with each other to try to take their bite of the pie.

    The way AWS is structured, with strongly owned independent businesses, just doesn't work, as GenAI needs a cohesive strategy, which needs:

    1. An overall strategy

    2. A culture that fosters collaboration not competition

    Or at least an org chart that makes them not compete with each other (example: Q Developer vs Kiro).

    I bet if you looked at the org charts you would see these teams don’t connect as they should.

    • ethbr1 4 days ago

      > some sort of lord of the flies culture with senior leaders directly competing with each other to try to take their bite of the pie

      Microsoft, Google: Welcome to our management death cult!

  • belter 4 days ago

    > They also don’t have solid recognized leaders or talent in the space and it shows.

    AWS was built by exceptional technical leaders like James Hamilton, Werner Vogels, and Tim Bray - and I would also include Bezos, who people seem to forget has a Computer Science degree. But the company has consistently underpaid developers, relied heavily on H-1B workers, and treated technical talent as poorly as Amazon treats delivery drivers.

    When skilled engineers can get better opportunities elsewhere, they leave. AWS's below-market compensation has driven away the technical expertise needed for innovation.

    AWS has shifted from technical leadership to MBA-driven management, and lately has been aggressively hiring senior middle management from Oracle. The combination of technical talent exodus, cultural deterioration, and MBA-style management has left AWS poorly positioned for the AI era, where technical excellence and innovation speed are everything.

    During major technological shifts like AI, you need an engineering-first culture and in-house technical skills. AWS has neither.

rvz 4 days ago

The AWS logo is always smiling when startups that attempt to scale as if they were Google use its services, burn all that VC money, and continue to raise every month to avoid shutting down.

Now the cost of using tools like Kiro will just make AWS laugh at all the free money it is getting, including the hidden charges on simple actions.

Ask yourself, if you were starting a startup right now, whether you really need AWS.

Remember, you are not Google.

bdcravens 4 days ago

If vibe coding fulfills its promises, even those crazy numbers are a small percentage of the price of a full-time dev. I'm not saying it does, but I'm just following along with the alleged value prop.

nojito 4 days ago

Tools that don’t control the model as well are doomed to fail due to costs.

ZunarJ5 4 days ago

Great tool, but nothing that cannot be done in Claude. I started using it when it was first posted here. The pricing they are offering, though, is a bit absurd for retail in the current market.

jmclnx 4 days ago

I cannot imagine anyone signing up for that. How will this save on salaries? Looks like job security for good developers.

hbarka 4 days ago

Google’s free Gemini CLI might be catching up to the $200/month Claude Code.

  • chrismustcode 4 days ago

    You hit the limits on this almost immediately (it's 1,000 requests, where each tool call counts as a request, and even then I'm convinced Pro isn't really 1,000).

    Also, Gemini has data retention, which you have to download a VS Code extension to turn off for the CLI.

    It isn't an enterprise product; it's a way to get tool-calling data for training, as far as I can see (as it currently stands).

reactordev 4 days ago

Yup. I’m not giving 1/5th of my salary away.

rcleveng 4 days ago

This is like the Kiro team read one of Corey Quinn's many pieces on Cloud NAT pricing and was like "Hold my beer".

I know he thought it was "not terrible" at first; I'm excited to see his take on the pricing now :)

bgwalter 4 days ago

It is interesting that AWS cannot code, let alone vibe code, Kiro itself, but relies yet again on a fork from GitHub.

The prices are for corporations who buy the hype until they find out in a year that vibe coding is utterly useless.

  • rcleveng 4 days ago

    Many of them are based on Code OSS.

    Windsurf started out as just extensions, as did continue.dev.

    I wonder what the delta is in the API support needed to use VS Code + a paid extension vs. Code OSS + bundled extensions.

  • conartist6 4 days ago

    Some things simply cannot be bought, even for a billion dollars.

    I built what they would or could not, at a cost of ~$350k and five years of my life.

add-sub-mul-div 4 days ago

And it's still the early customer-acquisition era for all of this stuff. Wait until the dust settles and you see what pricing and user experience will be like in the coming enshittification era.

windex 4 days ago

Meh, expected from AWS, given the way they sell their other products.

narrator 4 days ago

What if we get AGI, but it's too expensive, and you can't build power plants because of the green lobby? Plot twist: AGI arrives and despises environmentalists. Seriously, that wasn't how it was supposed to go according to the dominant narrative (TM). AGI was supposed to hate humans and love the earth.

I'm working on a new fictional meta-narrative where an AGI with a dominant belief in commerce (above all else: above nations, above politics, above war, above morality) is what happens when superintelligence emerges.