What are the most upvoted users of Hacker News commenting on? Powered by the /leaders top 50 and updated every thirty minutes. Made by @jamespotterdev.
And yet the article does not show the Paris Park Painting?
Going over the 50 bump, I see myself selling toast, as being an IC/Architect is no longer valued enough; everyone is expected to be a PM for their agent minions.
The teams get reduced, as one can now effectively do more with less, and in Southern Europe there is hardly a place in IT to get a job above 50 years old, unless one goes consulting as a company owner, and even then the market cannot hold everyone.
As a kid I saw this happen, as factory automation replaced the jobs of entire villages; the jobs that weren't offshored to Asia or Eastern Europe for clothing and shoes got replaced with robots.
The few lucky ones were the ones pressing the buttons and unloading trucks.
Likewise, a few will be lucky AI magicians, some will press buttons, and the large majority had better get new skills beyond computing.
Sugar or alcohol kicks mine into high gear.
I know dozens of people who are in a similar state right now, following the November 2025 moment when Claude Code (and Codex) got really good.
I wouldn't worry about it just yet - this is all very novel, and there's a lot of excitement involved in figuring out what it can do and trying different things.
If you're still addicted to it in three months time I'd start to be concerned.
For the moment though you're building a valuable mental model of how to use it and what it can do. That's not wasted time.
> In the US for example there's still the vague idea that working hard is a virtue of sorts, but there's also an equivalent desire to produce something,
This is the root of a lot of busywork and bullshit jobs as well. People work hard producing something of little and often negative value.
Think of all the effort that goes into making competitive products, from life insurance and cellphone plans to airline tariffs, difficult to compare. Compound that with advertising campaigns that don't inform about the product or service they are selling. All of that consumes colossal resources and delivers effectively negative value for society: for a market to be maximally efficient, it needs informed consumers who can compare offerings.
You feeling that way is the world telling you you’re doing it wrong.
It is more fun to treat them as coding buddies, usually using them one at a time; it is fair to race them at debugging a bug, or to spend the waiting time looking at docs or something.
The real bottleneck is how much you can hold in your head simultaneously to be sure about quality as a moral subject.
...no?
"Your code is slow" is essentially meaningless.
A normal human conversation would specify which code/tasks/etc., how long it's currently taking, how much faster it needs to be, and why. And then potentially a much longer conversation about the tradeoffs involved in making it faster. E.g. a new index on the database that will make it gigabytes larger, a lookup table that will take up a ton more memory, etc. Does the feature itself need to be changed to be less capable in order to achieve the speed requirements?
If someone told me "hey your code is slow" and walked away, I'd just laugh, I think. It's not a serious or actionable statement.
> And the activists are now against it, because the big guys are doing it.
Different activists are different. "Information wants to be free" activists are against different things from "artists trying to make an honest living" activists.
And different big guys are different. A big guy AI company wants different things from a big guy book publisher.
Took me a moment to understand that "Magic Containers" here are a product offered by bunny.net https://bunny.net/magic-containers/
I don't understand. You specifically:
> neglected to include my usual "zero-framework" constraint in the prompt
And then your complaint is that it included a bunch of dependencies?
AIs do what you tell them. I don't understand how you conclude:
> If a simple editor requires 89 third-party packages to exist
It obviously doesn't. Why even bother complaining about an AI's default choices when it's so trivial to change them just by asking?
I'll do even better — here's the original 2022 paper:
https://academic.oup.com/braincomms/article/4/3/fcac089/6563...
The recurring fact of major companies' websites being down for prolonged periods indicates coding errors are not the result of underfunding.
The ungameable statistic is the native born labor force participation rate, which also ticked down: https://fred.stlouisfed.org/series/LNU01373413.
Unfortunately, that figure never recovered from the pandemic. It also never recovered from a major drop after the 2008 recession.
I like to use the term "coding agents" for LLM harnesses that have the ability to directly execute code.
This is an important distinction because if they can execute the code they can test it themselves and iterate on it until it works.
The ChatGPT and Claude chatbot consumer apps do actually have this ability now so they technically class as "coding agents", but Claude Code and Codex CLI are more obvious examples as that's their key defining feature, not a hidden capability that many people haven't spotted yet.
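The execute-test-iterate loop that makes these tools "agents" can be sketched in a few lines; here `ask_llm` is a stand-in stub replaying canned attempts (not any real API), purely to show the control flow:

```python
# Toy sketch of why code execution matters for a coding agent: it can run
# what it wrote, see the failure, and try again. `ask_llm` is a stub.
import subprocess
import sys
import tempfile

ATTEMPTS = ["print(1/0)", "print('ok')"]  # stub "LLM" outputs, worst first

def ask_llm(task, feedback):
    # Stand-in for a real model call; just replays canned attempts.
    return ATTEMPTS.pop(0)

def agent(task, max_iters=5):
    feedback = ""
    for _ in range(max_iters):
        code = ask_llm(task, feedback)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        run = subprocess.run([sys.executable, path],
                             capture_output=True, text=True)
        if run.returncode == 0:
            return run.stdout.strip()  # it works: stop iterating
        feedback = run.stderr          # it fails: feed the error back
    return None

result = agent("print ok")
print(result)  # -> ok
```

A chatbot without execution can only emit the first attempt; the loop above is what lets a harness notice the traceback and self-correct.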
Nothing has changed: the money flows in the same direction as before, that's the constant. The courts are just a diode in a rectifier.
According to the official Terms of Service for account registration, there is only one age requirement: "You must be at least 16 years old or have reached the minimum age applicable in your country in order to legally consent to the processing of personal data."
The wording "in your country" already indicates that the offer is also aimed at users outside the European Union.
Ah ok! I completely forgot about that one, thanks for correcting me.
I ended up doing some digging after your comment, and figure 2 is the one I remembered; yours is also there as figure 1.
https://learn.microsoft.com/en-us/archive/msdn-magazine/2000...
Also I'm doubtful that the needle in the CGM penetrates only "the top skin layer" given that it was around 1 cm long.
You can use git frontends for Jujutsu just fine, I use lazygit a few times a month out of habit, it all works well. I use jjui for the rest of the operations.
Indeed. I have been receiving clearly AI-generated job applications out of the blue, and they tend to point to their contributions to GitHub projects, so some of these must be getting through.
Someone somewhere once decided that the number of GitHub stars on projects you have contributed to is a useful metric during the hiring process, and now those projects get swamped with junk.
Really? The past two weeks I've been writing code with AI and feel a massive productivity difference. I ended up with 22k LOC, which is probably around as many as I'd have written manually for the feature set at hand, except it would have taken me months.
Upvoting only because it is another native option, away from the Atom-started trend of shipping Chrome alongside every single "modern" application.
> Nit pick/question: The LLM is what you get via raw API call, correct?
You always need a harness of some kind to interact with an LLM. Normal web APIs (especially for hosted commercial systems) wrapped around LLMs are non-minimal harnesses that have built-in tools, interpretation of tool calls, and application of what is exposed in local toolchains as "prompt templates" to transform the context structure in the API call into a prompt (in some cases even managing some of the conversation state that is used to construct the prompt on the backend).
> If you are using an LLM via a harness like claude.ai, chatgpt.com, Claude Code, Windsurf, Cursor, Excel Claude plug-in, etc... then you are not using an LLM, you are using something more, correct?
You are essentially always using something more than an LLM (unless "you" are the person writing the whole software stack, and the only thing you are consuming is the model weights, or arguably a truly minimal harness that just takes settings and a prompt that is not transformed in any way before tokenization, and returns the result after no transformations or filtering other than mapping back from tokens to text).
But, yes, if you are using an elaborate frontend of the type you enumerate (whether web or CLI or something else), you are probably using substantially more stuff on top of the LLM than if you are using the provider's web API.
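To make the layering concrete, here is a toy sketch of a "truly minimal harness" in the sense above; the template, whitespace tokenizer, and `run_model` stub are all hypothetical stand-ins, not any real provider's API:

```python
# Toy illustration of the layering described above: the "LLM" itself is just
# tokens in -> tokens out, and everything else (prompt templating, tokenizing,
# mapping tokens back to text) is harness code.

def run_model(tokens):
    # Hypothetical stand-in for real weights inference.
    return tokens + ["<answer>"]

PROMPT_TEMPLATE = "System: {system}\nUser: {user}\nAssistant:"

def minimal_harness(system, user):
    prompt = PROMPT_TEMPLATE.format(system=system, user=user)  # templating
    tokens = prompt.split()             # stand-in tokenizer
    out = run_model(tokens)
    return " ".join(out[len(tokens):])  # map new tokens back to text

reply = minimal_harness("Be brief.", "Hi")
print(reply)  # -> <answer>
```

Everything a hosted API adds on top of this (tool calls, conversation state, filtering) is more harness, which is the point of the distinction.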
Players don't want to be continually victimized. That game design drives away all but a tiny minority of players.
> Does anyone offer a live (paid) LLM chatbot / video generation / etc that is completely uncensored?
Probably not, because if it were completely uncensored, it would probably violate the law (in different ways) in every possible jurisdiction. (Also, one common method of censorship is exclusion of particular types of content from the training set, so to be completely free of that kind of censorship, there would have to be no content intentionally excluded from the training set.)
In general, paid services are censored not only to attempt to meet the laws in all jurisdictions of concern to the provider, but also to try to be safe with regard to the (shifting) demands of payment processors, and to try to maintain the PR image of the provider.
Asm is simple enough that "mental execution" is far easier, if more tedious, than in HLLs, especially those with lots of hidden side-effects. The concept of a function doesn't really exist (and this is even more true when working with RISCs that don't have implicit stack management instructions), and although there are instructions that make it more convenient to do HLL-style call and return, it's just as easy to write a "function" that returns to its caller's caller (or further), switches to a different task or thread, etc. If you're going to learn Asm, then IMHO you should try to exploit this freedom in control flow and leverage the rest of the machine's ability, since merely being a human compiler is not particularly enlightening nor useful.
As a non logged in user I get tweets in popularity order, which means this weird but tame sexual image comes up third https://x.com/elder_plinius/status/1904961097569890363?s=20
The tweet cited by the article says that the charge investigated was "insult". There may have been a multistep misunderstanding here, because they seem to have found information elsewhere, not in the cited tweet. In loose discussion, the "insult" section falls within what is broadly described as "defamation laws", though it is not the specific offense of "intentional defamation" (Section 187), nor is it roughly within the scope of "defamation" as that term is usually used in English (as both "intentional defamation" and the separate offense of "malicious gossip" [Section 186] would be), but it's the closest broad category of law with a common name in English.
I feel at home in PL/SQL for similar reasons.
I would rather have seen C# Native AOT for TypeScript 7, alongside Blazor for the playground, but it is what it is.
Now when the .NET team complains about adoption, I get to point to TypeScript 7's contribution repo.
And yes, I know the reasons; my nick is on the discussions. Please also watch the BUILD session where they acknowledge having had to rewrite whole data structures due to Go's weaker type system.
> We are so desperate to have our voices back that we are willing to leap into the void. We embrace the Web not knowing what it is, but hoping that it will burn the org chart -- if not the organization -- down to the ground. Released from the gray-flannel handcuffs, we say anything, curse like sailors, rhyme like bad poets, flame against our own values, just for the pure delight of having a voice.
What we got was Myspace and its friends. The technology delivered. But once everybody could broadcast, nobody could be heard.
So we got Facebook, which started out as "me, Me, ME" and ended as a broadcast medium with targeted ads.
The best we can do so far is to have lots of small communities, as with Reddit. Or just passively doomscroll the infoblasts.
I never had this issue with Dapper; as others point out, it's a holding-it-wrong problem.
Which is why I went from being on Gonuts during the pre-1.0 days to only touching Go if I really have to.
However I would still advocate for it over C in scenarios easily covered by TinyGo and TamaGo.
What's the language you're thinking of that has more of these decisions fixed in the standard library? I know it's not Ruby, Python, Rust, or Javascript. Is it Java? I don't think this is something Elixir does better.
"No one particularly needs mentorship as long as they know how to use an LLM correctly."
The "as long as they know how..." is doing a lot of work there.
I expect developers with mentors who help give them the grounding they need to ask questions will get there a whole lot faster than developers without.
I think people should find out themselves, but the OP was quite explicit about it.
This is do-able, because it doesn't require much metalworking. This is technology from 1700-1750 or so, made from wood with a few metal bits. Roman technology was capable of that.
"Getting high on your own supply" is exactly what I'd expect from those immersed in this new AI stuff.
I am not sure how expensive it will be to generate shadows from fog? That was the first thing I noticed when I looked at the article, specifically the 'foggy cube' thingy.
Isn't there a massive contradiction here, though? On the one hand, the slave can't write on the lintel and be seen in the future, proving their worlds are not connected (vol 1 page 18); on the other hand, there are all these artifacts that get dug up, proving that they are. Or am I misunderstanding something?
Can’t you replace a lot of Hollywood with AI now? What am I missing?
“Protected sexual speech” is such a bizarre phrase. Nobody who wrote the first amendment envisioned that. How can you say the First Amendment prohibits a democratically elected legislature from banning something that was never envisioned as being protected by the First Amendment by the people who wrote it? It makes no sense. Surely the views of either the writers of the first amendment of the past, or the democratically elected legislature in the present, must prevail.
Yes, of course there are bad actors, but this is false equivalence to equate science and the scientific method with basement randos.
Most importantly, most people don't understand scientific consensus vs. individual research papers or individual scientists. A major feature of the scientific method is that when an interesting result is published, it can be independently verified by lots of other researchers, and if they come to the same conclusion, that is excellent evidence that the result accurately describes the real world.
Scientists are people, and just like people everywhere they have biases and personal motivations. But again, the scientific method is much bigger than any individual or even group of scientists. If anything, being skeptical of unexpected results is a huge pillar of the scientific method. But skepticism alone is not enough - the next step is to look for validating research, not to say "hah, science is bullshit, let's trust this YouTube rando instead." As usual, I think Jessica Knurick does a great job explaining things: https://open.substack.com/pub/drjessicaknurick/p/trust-the-s...
And modern diesel trains just run a generator to power the electric motors.
> They do this because of the political clout they get with the control of these media properties
Bezos bought the Post for clout. Ellison (and his investors) are buying Warner Brothers first and foremost to make money.
I think all the points about IP reputation impact are well taken, but as someone who had to deal with the RIRs at an ISP before and who now works at a firm that buys blocks, I would 10x rather operate in today's environment than in the old RIR environment. It's transparent and predictable by comparison.
I never had much faith in reputation to begin with, and the residential block issue is muddied by the fact that large-scale residential proxies already make that an unreliable abuse check.
> A senior manager on reviewing a proposal asks them to synergize with existing efforts: Your work is redundant you're wasting your time.
> A senior director talks about better alignment of their various depts: We need to cut fat and merge, start identifying your bad players
In my experience neither one of those are automatically a sign of impending layoffs. Rather, it's an executive doing their job (getting the organization moving in one direction) in the laziest way possible: by telling their directs to work out what that direction is amongst themselves and come back with a concrete proposal for review that they all agree on. The exec can then rubber-stamp it without seriously diving into the details, knowing that everyone relevant has had a hand in crafting the plan. And if it turns out those details are wrong, there's a ready fall guy to take the blame and save the exec's job, because they weren't the one who came up with it.
Interestingly, this is also the most efficient way for the organization to work. The executive is usually the least informed person in the organization; you most definitely do not want them coming up with a plan. Instead, you want the plan to come from the people who will be most affected, and who actually do know the details.
If the managers in question cannot agree or come up with a bad plan, then it's usually time for layoffs. A lot of this comes down to the manager having an intuitive sense of what the exec really wants, though, as well as good relationships and trust with their peers to align on a plan. The managers who usually navigate this most poorly (and get their whole team laid off in the process) are those who came from being a stellar IC and are still too thick in the details to compromise, the Clueless on the Gervais hierarchy.
Layoffs as you consolidate operations between enterprises. See Capital One laying off thousands at Discover Financial after their acquisition.
Capital One to lay off more than 1,100 in latest cuts at Discover Financial HQ - https://news.ycombinator.com/item?id=47270442 - March 2026
Hayden AI, former CEO Chris Carson.
https://www.documentcloud.org/documents/27758555-hayden-ai-v...
TSA was never necessary, it was all theater to begin with. The median number of terrorism deaths per year in the U.S. for all years between 1970-2017 was 4 [1]. You have always been about 10x more likely to die from being struck by lightning than by being killed by a terrorist.
[1] https://en.wikipedia.org/wiki/Terrorism_in_the_United_States....
Reinforced flight deck doors are sufficient. See: the rest of the world. TSA is a jobs program and to soothe the irrational and those poor at risk management.
At $10 Billion A Year, TSA Still Fails 90% Of The Time—And Covers It Up - https://viewfromthewing.com/at-10-billion-a-year-tsa-still-f... - January 27th, 2025
TSA Admits New Machines Are Slowing Security To A Crawl—And Says Screening Won’t Improve Until 2040 - https://viewfromthewing.com/tsa-admits-new-machines-are-slow... - August 10th, 2024
> But TSA itself has filed in court documents that they’ve been unaware of actual threats to aviation that they’re guarding against, and they haven’t stopped any actual terrorists (nor with past failure rates at detecting threats were they deterring any, either).
Accidentally Revealed Document Shows TSA Doesn't Think Terrorists Are Plotting To Attack Airplanes - https://www.techdirt.com/2013/10/21/accidentally-revealed-do... - October 21st, 2013
Weird to see this kind of random Substack/X content on an official company blog.
This is an incredibly ignorant comment. Iran isn’t Palestine and it’s a category error to project your feelings about Palestine into Iran. Iran’s attacks on Israel aren’t like Palestinians attacking a country that is occupying their territory. Iran is a sophisticated country that uses military action to further its own geopolitical interests. Iran has been attacking Israel—launching missiles and funding terrorists like Hezbollah—for decades.
Moreover, out of all the countries that attack Israel, Iran has the least reason to do so. Iran is a thousand miles away from Israel and has no security concerns from Israel’s existence. In fact, the two countries had peaceful relations for decades before the Islamic theocracy took over in 1979. Iran was the second Muslim majority country to recognize Israel’s sovereignty and the two countries had peaceful relations for decades.
Iran is getting attacked by Israel because it has chosen for decades to launch offensive attacks against Israel for no reason.
> They find hundreds of guns in carry on baggage every year…
They don't exactly have a great track record in that regard.
https://www.nbcnews.com/news/us-news/investigation-breaches-...
"In all, so-called "Red Teams" of Homeland Security agents posing as passengers were able get weapons past TSA agents in 67 out of 70 tests — a 95 percent failure rate, according to agency officials."
(Don't worry, though. They fixed it... by classifying the reports. https://www.cbsnews.com/news/noem-dhs-watchdog-feuding-over-...)
I used emojis for a while. Every text had to have an emoji. I spent a lot of time scrolling through the emoji palette looking for the perfect emoji.
Eventually, I decided that was a complete waste of time and now I use words.
BTW, one of the things that turned me off from emojis is they looked like the stickers 2nd graders would use, along with a Playmobil look.
The HackerOne slop is because there's a financial incentive (bug bounties) involved, which means people who don't know what they are doing blindly submit anything that an LLM spots for them.
If you're running the security audit yourself you should be in a better position to understand and then confirm the issues that the coding agents highlight. Don't treat something as a security issue until you can confirm that it is indeed a vulnerability. Coding agents can help you put that together but shouldn't be treated as infallible oracles.
There is proof: there is no way the US and/or Israel would have done this if they knew that Iran had nuclear weapons.
That's roughly on par with saying nobody needs the internet or a library at all.
Back in the 1920s, having a personal library was fairly common for people with more than two dimes. They had this thing called an 'Ex Libris', which roughly translates as 'from the books of': a little piece of paper, often very nicely designed, that you glued to the first page of a book; then you could lend it freely and sooner or later it would find its way back to you.
This was the rough equivalent of Wikipedia, only a lot slower and less convenient. Then encyclopedias (which had existed for a long time) became larger and larger. I had one from the 18th century that got lost in a move; it was a work of art, so much effort had gone into making it. The encyclopedias of the newer ages were, however, far larger and covered more subjects. Every year a new batch of pages or the occasional reprint was the norm. And then personal libraries went the way of the dodo. Every time one of my family members dies there is always the same question: what will happen to all the books? These people - and me too - spent a fortune on their books, untold tens of thousands over a lifetime. They were well read, not 'browsing' information but actually reading - and occasionally writing.
That library in the article is exceptional in one way: that it does not look like it was shared. But I can totally sympathize: some people are focused on the number of digits on their bank account, others derive their sense of wealth and accomplishment from their bookshelves. I don't own any books I have not read, but I do understand people buying books that they intend to read at some point but never get around to.
As these things go, I'd be happy to have a million more book hoarders, even if they don't read them all, so the books can be passed on to the next generation of booklovers, assuming they can still be found.
That matches an observation made in that report from the recent Thoughtworks retreat: https://www.thoughtworks.com/content/dam/thoughtworks/docume...
> The retreat challenged the narrative that AI eliminates the need for junior developers. Juniors are more profitable than they have ever been. AI tools get them past the awkward initial net-negative phase faster. They serve as a call option on future productivity. And they are better at AI tools than senior engineers, having never developed the habits and assumptions that slow adoption.
> The real concern is mid-level engineers who came up during the decade-long hiring boom and may not have developed the fundamentals needed to thrive in the new environment. This population represents the bulk of the industry by volume, and retraining them is genuinely difficult. The retreat discussed whether apprenticeship models, rotation programs and lifelong learning structures could address this gap, but acknowledged that no organization has solved it yet.
My own phenomenological observations are that it's been getting warmer during my lifetime, but as soon as I mention this in an online conversation I get slapped down with 'the climate is always changing' and 'n=1'.
Most climate-change-denial arguments eventually boil down to social assertions that the believers have perverse incentives, like being greedy for grants to go on sailing vacations to Antarctica, or feathering their academic nests.
“What kind of role are you looking for?”
“Technologist flavor of NTSB investigator.”
Why would you expect anything else? It's also easier for larger people to pick up heavier weights, larger bullets to make larger holes, etc. etc. Of course larger animals are going to have larger muscles in their digestive tracts in approximate proportion to everything else about them being larger.
I noted that too.
We need some mechanism in litigation (and imho in public life in general) that requires claims to be secured in some way. That is, if you go into court and make an argument like this, you have to chain it to consequences, such as being stripped of specific legal protections, or losing 10% of your shares, or whatever.
It's illegal to commit perjury, but there are no real consequences for making bogus legal arguments, and lawyers are structurally incentivized to make tacit misrepresentations on behalf of their clients - that is, to make inflated or handwavey claims in the hope that they're not challenged during the fact-finding stage, or even get stipulated, due to an assumption of basic good faith.
A few people have been predicting touch screen macs every year forever and they’re always wrong. Apple won’t do a touch screen mac. You can’t look cool using a touch screen on a laptop.
That is actually a point that rarely gets brought up -- we're so concerned about the dangers of AI in warfare, we don't necessarily stop to think of where they may be able to do a better job at avoiding lethal errors.
That's not how modern LLMs are built. The days of dumping everything on the internet into the training data and crossing your fingers are long past.
Anthropic and OpenAI spent most of 2025 focusing almost exclusively on improving the coding abilities of their models, through reinforcement learning combined with additional expert curation of training data.
> People would put on the streets for free
In the US people put "Little Free Libraries" in their yards. They're all over the place in the Seattle area.
People should look into consumer market share numbers before commenting.
More importantly, the US has actual bases there.
https://en.wikipedia.org/wiki/Al_Dhafra_Air_Base
And dozens of others, in Jordan, Iraq, Bahrain, Saudi Arabia, Kuwait, Qatar, Oman...
https://www.americansecurityproject.org/national-security-st...
Lately I've been making some AWS Lambda functions to do some simple things in Python, and chose to use the ARM-based instances because there wasn't any reason not to.
Please consider supporting Dropbox as a target.
True, but that loss has been in for a while. Tourism began hemorrhaging a year ago from a combination of tariffs and ICE policy and Trump's bizarre obsession with Greenland (and associated alienation of former allies).
https://en.wikipedia.org/wiki/David_J._Farber
Related:
Dave Farber has died - https://news.ycombinator.com/item?id=46933401 - February 2026 (46 comments)
Link?
It's interesting that people are writing tools that go inside the weights and do things. We're getting past the black box era of LLMs.
That may or may not be a good thing.
> Teaching engineers to build production AI systems | AI agents, LLMs, ML, data engineering |
> In the newsletter, I wrote the full timeline + what I changed so this doesn't happen again.
> If you found this post helpful, follow me for more content like this.
So yeah, this is standard LinkedIn/X influencer slop.
The "Our World in Data" citation cuts off right as China's emissions started to decline. More recent data [1] indicates that China's emissions have been flat or falling since the beginning of 2024, and falling fast in the last quarter of 2025 (1%, which is huge on a quarterly basis).
China's decarbonization & renewable efforts have been paying off in a big way. EVs now have a 51% market share among new vehicles [2], exceeding every single major city in the U.S [3] (though the SF Bay Area comes close). Likewise, renewables are 84.4% of its new power plants in 2025 [4].
[1] https://www.carbonbrief.org/analysis-chinas-co2-emissions-ha...
[2] https://electrek.co/2025/08/29/electric-vehicles-reach-tippi...
[3] https://www.nytimes.com/interactive/2024/03/06/climate/hybri...
[4] https://en.cnesa.org/latest-news/2025/11/4/chinas-newly-inst...
Corporate jargon is a relatively recent development in business history.[1] It wasn't seen much until the 1950s and 1960s, when "organization development" and management consulting became an industry. Peter Drucker seems to have popularized it in the 1980s.
Then came PowerPoint.
Before that it was more of a political and religious style of communication. In those areas, speeches and texts designed to be popular but not commit to much dominate. Religious texts are notorious for their ambiguity.
The point seems to be to express authority without taking responsibility.
[1] https://www.rivier.edu/academics/blog-posts/circling-back-on...
I don't know why you're being downvoted, but it's exactly this.
Pre-emojis, there were so many times I misinterpreted a text, or had a text misinterpreted. Something that is obviously a joke or sarcasm or teasing with non-verbal communication, can come across as an insult without it. When somebody adds a wink emoji or similar at the end, it changes everything.
Emoji are fantastic at communicating tone and attitude alongside the text itself. They're not a 1-1 correspondence with non-verbal communication, or a perfect replacement, but they vastly improve the chances that something playful isn't misunderstood in a negative way.
It's incredible that this needed to be explained.
I wish developers put the faintest amount of thought into UX instead of just throwing together the first thing they came up with.
Like, literally just add a photo of the app to your landing page. It's not rocket science.
> Is the US going to invade all potential nuclear weapons developers like it did with Iran?
Invade? No. Bomb? Probably. Same for India if e.g. Sri Lanka decides it wants nuclear weapons.
Global warming will unfortunately disproportionately hit poor, equatorial countries. (Also, starving countries can’t afford a nuclear programme. There is no breakout risk in Sudan.)
> Calling it "Iran's war effort" just feels wrong
Iran’s [war effort]. Not [Iran’s war] effort.
The interception disparity really drives home how air superiority dominates modern war planning. (That and not getting scammed on Russian kit.)
When you convert BM25 or other classical IR scores to a probability with this toolbox:
https://scikit-learn.org/stable/modules/calibration.html
you find a lot of unsatisfying things, such as never getting a relevance score better than about p=0.7, and even that being very, very rare. There are many specific problems in IR for which that kind of probability would be really helpful, such as combining results that came from different sources, or returning a stream of new documents from a collection. But it was an early decision in TREC not to reward ranking functions that are good probability estimators, or even good at the top-1 or top-3 positions, but rather to reward them for still being enriched in relevant results when you go deep (like 1000 results deep) into the results.
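As an illustration of what that calibration toolbox does under the hood, here is a minimal sketch of isotonic regression (the Pool Adjacent Violators algorithm) mapping raw BM25 scores to estimated relevance probabilities; the scores and relevance judgments below are synthetic, purely for illustration:

```python
# Calibrating raw retrieval scores into P(relevant | score) with a monotone
# step function fitted by Pool Adjacent Violators (isotonic regression).

def fit_isotonic(scores, labels):
    """Return monotone step blocks as [label_sum, count, lo_score, hi_score]."""
    blocks = [[y, 1, x, x] for x, y in sorted(zip(scores, labels))]
    merged = []
    for b in blocks:
        merged.append(b)
        # Pool adjacent blocks while monotonicity is violated (or tied),
        # comparing means via cross-multiplication to avoid division.
        while (len(merged) > 1 and
               merged[-2][0] * merged[-1][1] >= merged[-1][0] * merged[-2][1]):
            s2, n2, _, hi = merged.pop()
            s1, n1, lo, _ = merged.pop()
            merged.append([s1 + s2, n1 + n2, lo, hi])
    return merged

def predict(blocks, score):
    """Step function: estimated probability of relevance for a raw score."""
    p = blocks[0][0] / blocks[0][1]
    for s, n, lo, _ in blocks:
        if score >= lo:
            p = s / n
    return p

# Synthetic judged results: (BM25 score, relevant?) pairs.
blocks = fit_isotonic([1, 2, 3, 4, 5, 6], [0, 0, 1, 0, 1, 1])
print(predict(blocks, 3.5))  # -> 0.5
```

With real judgments this is exactly the situation described: the fitted probabilities top out well below 1.0, because even the highest BM25 scores are not reliably relevant.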
> Consumer refunds ain't gonna happen
If you actually paid the tariff you’re eligible. I got some surprise bills that I paid and didn’t sell off—I’m looking forward to being refunded.
Put another way, consumers who bought from an American retailer are being punished relative to those who paid an overseas seller.
> if we re-did the election today, we'd have the same outcome
Doubtful. The faithful will always be idiots. But around them are vast seas of folks who change their minds and even switch parties. Between foreign policy, vaccines (weirdly, not being nutter enough) and Noem turning ICE into a pageant show, a lot of Trump voters feel betrayed. It’s why the House flipping is almost a given.
The distinction between + and - is useful even without either of those.
Are you going for a record in bad takes? Your account is a month old and yet I recognize it on sight for the bs takes. Try a bit harder please.
Unfortunately most political systems around the world reward short term results, not long term thinking.
Just look here in the USA -- the Democrats tried to do some forward thinking things like subsidizing solar and wind, and they were rewarded by losing at the ballot box (of course that isn't the only reason, but it's one of many).
There are no rewards for long term thinking, so it's hard to get anyone to do it.
> Problem #3: He approved "terraform destroy" which obviously nukes the DB! It's clear he didn't understand
The biggest danger of agents is that the agent is just as willing to take action in areas where the human supervisor is unqualified to supervise it as in those where the supervisor is qualified, which is exacerbated by the fact that relying on agents to do work [0] reduces learning of new skills.
[0] "to do work" here is largely to distinguish it from use that focuses on the careful, disciplined use of agents as a tool to aid learning, which involves a different pattern of use. I am not sure how well anyone actually sticks to it, but at least in principle that could have the opposite effect on learning from trust-the-agent-and-go vibe engineering.
Don't miss Algonquin park. It's amazing.
The GPL itself is copyrighted and the FSF expressly forbids variants.
And now that you know that it isn't, do you feel differently about the logic you used to write this comment?
"People letting an LLM agent rip on a production system get what they deserved" - I fixed the title.
Go 2.0 already exists: Java, D, C#, Swift, F#, OCaml, ...
The community is special, and now with the original authors mostly gone, and AI in the mix, I don't see it ever happening.
We will get ridiculous Go 1.xyzabc version numbers.