HN Leaders

What are the most upvoted users of Hacker News commenting on? Powered by the /leaders top 50 and updated every thirty minutes. Made by @jamespotterdev.

jerf ranked #32 [karma: 91234]

Worse yet, the problems are going to be real.

There's a lifecycle to these hype runs, even when the thing behind the hype is plenty real. We're still in the phase where if you criticize AI you get told you don't "get it", so people are holding back some of their criticisms because they won't be received well. In this case, I'm not talking about the criticisms of the people standing back and taking shots at the tech, I'm talking about the criticisms of those heavily using it.

At some point, the dam will break, and it will become acceptable, if not fashionable, to talk about the real problems the tech is creating. Right now there is only the tiniest trickle from the folk who just don't care how they are perceived, but once it becomes acceptable it'll be a flood.

And there are going to be problems that come from using vast quantities of AI on a code base, especially of the form "created so much code my AI couldn't handle it anymore and neither could any of the humans involved". There's going to need to be a discussion of techniques for handling this. There are going to be characteristic problems and solutions.

The thing that really makes this hard to track though is the tech itself is moving faster than this cycle does. But if the exponential curve turns into a sigmoid curve, we're going to start hearing about these problems. If we just get a few more incremental improvements on what we have now, there absolutely are going to be patterns as to how to use AI and some very strong anti-patterns that we'll discover, and there will be consultants, and little companies that will specialize in fixing the problems, and people who propose buzzword solutions and give lots of talks about it and attract an annoying following online, and all that jazz. Unless AI proceeds to the point that it can completely replace a senior engineer from top to bottom, this is inevitable.

jerf ranked #32 [karma: 91234]

Amusingly, that's unnecessary, but possibly not for the reason most people think. It's not because the hard drive hardware is oblivious to runs of 0s and 1s exactly... it's because it's actually so sensitive that it already is recording the data in an encoding that doesn't allow for long runs of 0s and 1s. You can store a big file full of zeros on your disk and the physical representation will be about 50/50 ones and zeros on the actual storage substrate already. Nothing you do at the "data" layer can even create large runs of 0s or 1s on the physical layer in the first place. See https://www.datarecoveryunion.com/data-encoding-schemes/
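
The effect described above can be illustrated with a toy sketch (this is not any real drive's coding scheme, which uses run-length-limited codes; an LFSR scrambler is just the simplest stand-in): a "file full of zeros" XORed with a pseudo-random keystream comes out roughly half ones and half zeros.

```python
# Toy illustration: scramble a stream of zero bits with a 16-bit Fibonacci
# LFSR (taps 16, 14, 13, 11 -- a common maximal-length choice). The physical
# layer in real drives uses more sophisticated codes, but the point stands:
# the on-disk bit pattern bears no direct resemblance to the data bits.

def lfsr_bits(n, state=0xACE1):
    """Yield n pseudo-random bits from a 16-bit Fibonacci LFSR."""
    for _ in range(n):
        bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        yield bit

data = [0] * 10_000                       # a "big file full of zeros"
scrambled = [d ^ k for d, k in zip(data, lfsr_bits(len(data)))]
print(sum(scrambled) / len(scrambled))    # roughly 0.5: about half ones
```

Descrambling is the same XOR with the same keystream, so the data layer round-trips exactly while the substrate never sees a long run of identical bits.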

tptacek ranked #1 [karma: 417173]

Wait: "write tests first" isn't simple and it's controversial. The benefits of TDD in pure-human development are debatable (I'd argue, in many cases, even dubious). But the equation changes with LLMs, because the cost of generating tests (and of keeping them up to date) plummets, and test cases are some of the easiest code to generate and reason about.

It's not as simple an observation as you're making it out to be.

pjmlp ranked #17 [karma: 126613]

I do, and hope that one day stuff like dependent types and formal proofs are everyday tools, alongside our AI masters, which also don't use any learnings from scientific research.

pjmlp ranked #17 [karma: 126613]

We just started another war with unforeseen consequences to the planet, while destroying all resources to feed the AI monster, forget about all those paper straws.

pjmlp ranked #17 [karma: 126613]

While using C extensions, and yes, Microslop would rather have you using C++.

https://herbsutter.com/2012/05/03/reader-qa-what-about-vc-an...

Even if, in the years after that post, they added support for C11 and C17, minus some stuff like aligned mallocs.

PaulHoule ranked #25 [karma: 105364]

Can't wait until we've reached the point where AIs are calling out the rare blog post written by humans.

PaulHoule ranked #25 [karma: 105364]

Just deleting 40,000 files from the node_modules of a modest Javascript project can thoroughly hammer NTFS.

paxys ranked #41 [karma: 80536]

"Hey ChatGPT, my NYC landlord is raising my rent by $500, and says I must pay by Monday or leave. What do I do?"

ChatGPT - This is very likely illegal under Housing Stability and Tenant Protection Act of 2019 (HSTPA), specifically New York Real Property Law § 226-c (Notice required for rent increases), RPL § 232-a / § 232-b (Month-to-month termination), RPL § 232-c (Fixed-term lease protections), RPAPL § 711 (Legal eviction procedure) and NYC Admin Code § 26-501+ (Rent stabilization). Here's what you should reply with... And here are some city resources you can contact...

ChatGPT now - IDK, pay a lawyer.

So under the guise of "protection" you are taking away the strongest knowledge tool common people have had at their disposal in a generation, probably ever.

jerf ranked #32 [karma: 91234]

But... why?

How do you win moving your central controller from a 4GHz CPU to a multi-hundred-MHz single GPU core?

If we tried this, all we'd do is isolate a couple of cores in the GPU, let them run at some gigahertz, and then equip them with the additional operations they'd need to be good at coordinating tasks... or, in other words, put a CPU in the GPU.

WalterBright ranked #43 [karma: 79047]

Value is what you're willing to pay for something.

Laundromats aren't particularly profitable businesses.

PaulHoule ranked #25 [karma: 105364]

My agent might refuse to use stablecoins when it can run up 2% cash back for me on my cards.

minimaxir ranked #47 [karma: 73752]

Likely yes, since LinkedIn has no API.

dragonwriter ranked #16 [karma: 127453]

> Why do school board super-intendants and administration make more money than teachers themselves?

The more and less cynical explanations (and both play a role, IMO):

(1) Because individuals in those roles have closer relationships to the people that set the salaries than do individual teachers, and

(2) Because otherwise people with experience in education would continue as teachers and not seek roles as superintendents or other administrators (or seek the advanced degrees sought for those roles whose only financial payoff is greater competitiveness for those higher paying roles.)

PaulHoule ranked #25 [karma: 105364]

Hilarious too that he uses the word "wasteland" for something that's supposedly good. Perhaps it is a double coding where all of us normies mock it but the FOMO-blinded have their own private reading and say the rest of us are totally wrong.

Reminds me of those 419 emails where the grammar is bad, the story makes no sense, but hey there really are people who expect $10 million to fall out of the sky because they already had $10 million fall out of the sky on them.

mooreds ranked #35 [karma: 88728]
tptacek ranked #1 [karma: 417173]

DNSSEC can't protect against an ECH downgrade. ECH attackers are all on-path, and selectively blocking lookups is damaging even if you can't forge them. DoH is the answer here, not record integrity.

steveklabnik ranked #29 [karma: 97110]

> Can we make an entirely new programming language? Can we make an OS?

I have seen both of these already. I've done the former personally, and I've seen links to at least kernels for the latter.

(I didn't do it via gastown, just regular old "use Claude".)

bookofjoe ranked #26 [karma: 103692]
tptacek ranked #1 [karma: 417173]

I'd argue the most interesting part of this piece isn't what it says about Paul Graham, but rather the observation it makes about writing. I think about "It Turns Out" all the time, and it's virtually never because I'm in that moment caring about something Graham wrote.

PaulHoule ranked #25 [karma: 105364]

For all of us who can't get on the housing ladder because we're spending it all on subscriptions: https://archive.ph/5hJpS

PaulHoule ranked #25 [karma: 105364]

With the strange position of French philosophy in the anglosphere, I guess the right needs a French philosopher. I mean, you see popular books about French theory in normal countries, like Japan, but because the French colonized Britain, English is packed with highfalutin words from French (L'Ensemble, Quotidien, ...) that make undertranslated texts reek with assumed superiority; French theory puts the "gate" in the "gatekeeping" of our academy.

simonw ranked #27 [karma: 99977]

The thing I'm most excited about is the moment that I run a model on my 64GB M2 that can usefully drive a coding agent harness.

Maybe Qwen3.5-35B-A3B is that model? This comment reports good results: https://news.ycombinator.com/item?id=47249343#47249782

I need to put that through its paces.

paxys ranked #41 [karma: 80536]

Mac Mini is already the Mac Mini version of this. How much lower in price can it realistically get?

dragonwriter ranked #16 [karma: 127453]

> Isn't social security a thing?

Social Security alone will, at best, slightly mitigate poverty. 401(k)s are generally employee-funded, with some firms providing matching funds, especially during good economic times and where the firm is in a field where the labor it relies on is relatively scarce, so that there is competition for talent.

EDIT: The line about Social Security is a little inaccurate in the extreme case; it's actually technically possible to reach a moderate income ($62k/year) on Social Security, if you have a long enough working career (35 years or more) earning at the maximum taxed wages for Social Security (currently $185k+) and claim at or beyond the age that maximizes the benefit calculation (70 years).

rbanffy ranked #5 [karma: 186390]

I’m not sure that many people want Windows badly enough that they would buy an Apple device and remove the original OS just to run Windows.

From my personal experience, Windows users generally don’t mind Windows, but nobody I have ever met finds it more desirable than macOS.

PaulHoule ranked #25 [karma: 105364]

Personally, I see conservatives have drunk the worst Kool-Aid of the "woke": they speak the same language of marginalization and discrimination, and it leads to the conclusion that they need some kind of "affirmative action", no matter how much they deny it.

rayiner ranked #18 [karma: 125848]

For daily use, single-core speed matters most. Web browsing, UI rendering, etc. are still mostly single-threaded.

crazygringo ranked #38 [karma: 82325]

> id like to see Apple themselves optimise the macOS experience for 8gb Ram.

How is it not already? MBAs with 8 GB of RAM run great. Macs are incredibly good with memory management.

Brajeshwar ranked #50 [karma: 71886]

What’s going on with Apple? Are they doing one-hardware-a-day week/month now?

pjmlp ranked #17 [karma: 126613]

In our agency that would be a plus, because we focus on customising existing products instead of building everything from scratch.

Exactly because that means lower software development costs when delivering solutions.

paxys ranked #41 [karma: 80536]

From the marketing it’s obvious that this is built for students, so I doubt they intend for the useful life to be greater than 3-4 years.

jacquesm ranked #2 [karma: 239417]

This is a very nice development, but it is tackling the easy stuff. I'd love to see an open source inverter that can operate in stand-alone mode or in grid connected mode. All of these grid connected devices with closed source are a massive risk, especially given how small and cheap a WiFi or cell modem is nowadays.

paxys ranked #41 [karma: 80536]

There seem to be more AI app building platforms than actual apps being built these days.

paxys ranked #41 [karma: 80536]

It's possible that they are selling it close to cost to get more young people into the macOS/iOS/iPadOS ecosystem. If you can translate each one of these into a "Pro" device sale down the line then it's a win for Apple.

jerf ranked #32 [karma: 91234]

Everything on a disk ends up as a linear sequence of bytes. This is the source of the term "serialization", which I think is easy to hear as a magic word without realizing that it is actually telling you something important in its etymology: It is the process of taking an arbitrary data structure and turning it into something that can be sent or stored serially, that is, in an order, one bit at a time if you really get down to it. To turn something into a file, to send something over a socket, to read something off a sheet of paper to someone else, it has to be serialized.

The process of taking such a linear stream and reconstructing the arbitrary data structure used to generate it (or, in more sophisticated cases, something related to it if not identical), is deserialization. You can't send anyone a cyclic graph directly but you can send them something they can deserialize into a cyclic graph if you arrange the serialization/deserialization protocol correctly. They may deserialize it into a raw string in some programming language so they can run regexes over it. They may deserialize it into a stream of tokens. This all happens from the same source of serialized data.

So let's say we have an AST in memory. As complicated as your language likes, however recursive, however cross-"module", however bizarre it may be. But you want to store it on a disk or send it somewhere else. In that case it must be serialized and then deserialized.

What determines what the final user ends up with is not the serialization protocol. What determines what the final user ends up with is the deserialization procedure they use. They may, for instance, drop everything except some declaration of what a "package" is if they're just doing some initial scan. They may deserialize it into a compiler's AST. They may deserialize it into tree sitter's AST. They may deserialize it into some other proprietary AST used by a proprietary static code analyzer with objects designed to not just represent the code but also be immediately useful in complicated flow analyses that no other user of the data is interested in using.

The point of this seemingly rambling description of serialization is that

"why keep files as blobs in the first place. If a revision control system stores AST trees instead"

doesn't correspond to anything actionable or real. Structured text files are already your programming language's code stored as ASTs. The corresponding deserialization procedure involves "parsing" them, which is a perfectly sensible and very, very common deserialization method. For example, the HTML you are reading was deserialized into the browser's data structures, which are substantially richer than "just" an AST of HTML due to all the stuff a browser does with the HTML, with a very complicated parsing algorithm defined by the HTML standard. The textual representation may be slightly suboptimal for some purposes but it's pretty good at others (e.g., lots of regexes run against code over the years). If you want some other data structure in the consumer, the change has to happen in the code that consumes the serialized stream. There is no way to change the code as it is stored on disk to make it "more" or "less" AST-ish than it already is, and always has been.

You can see that in the article under discussion. You don't have to change the source code, which is to say, the serialized representation of code on the disk, to get this new feature. You just have to change the deserializer, in this case, to use tree sitter to parse instead of deserializing into "an array of lines which are themselves just strings except maybe we ignore whitespace for some purposes".

Once you see the source code as already being an AST, it is easy to see that there are multiple ways you could store it that could conceivably be optimized for other uses... but nothing you do to the serialization format is going to change what is possible at all, only adjust the speed at which it can be done. There is no "more AST-ish" representation that will make this tree sitter code any easier to write. What is on the disk is already maximally "AST-ish" as it is today. There isn't any "AST-ish"-ness being left on the table. The problem was always the consumers, not the representation.

And as far as I can tell, it isn't generally the raw deserialization speed that is the problem with source code nowadays. Optimizing the format for any other purpose would break the simple ability to read it as source code, which is valuable in its own right. But then, nothing stops you from representing source code in some other way right now if you want... but that doesn't open up possibilities that were previously impossible, it just tweaks how quickly some things will run.
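
A minimal sketch of the argument, using Python's stdlib for illustration: the source text is the serialized form, and what you end up with depends entirely on the deserializer the consumer chooses, not on anything stored on disk.

```python
# The same serialized form (source text), deserialized two different ways:
# once into a full AST, once into a flat array of line strings. Which one
# you get is decided by the consumer, not by the on-disk representation.
import ast

source = "def add(a, b):\n    return a + b\n"

tree = ast.parse(source)            # deserialize into an AST...
lines = source.splitlines()         # ...or into a mere array of strings

print(type(tree.body[0]).__name__)  # FunctionDef
print(len(lines))                   # 2
print(ast.unparse(tree))            # re-serialize: back to readable source
```

A version-control system storing "AST trees instead of blobs" would change none of this; it would only shift where the parse happens.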

pjmlp ranked #17 [karma: 126613]
simonw ranked #27 [karma: 99977]

It's the closest I have to touching on code review so far.

rayiner ranked #18 [karma: 125848]

You’re overlooking the fundamental difference between Iranian society and Afghan society. In Afghanistan, the U.S. was trying to bomb a place that was always a collection of small feudal states into being a functioning country. In Iran, it’s trying to dislodge a theocracy that’s taken over a country that’s had orderly, centralized administration for almost two thousand years.

I wouldn’t bet on either approach working. But a good outcome in Afghanistan was always completely hopeless. A good outcome in Iran is merely unlikely.

Brajeshwar ranked #50 [karma: 71886]

Btw, Cloudflare has a Speed Test at https://speed.cloudflare.com

stavros ranked #45 [karma: 76049]

Or crappily built consumer browsers, extensions, proxies, caches, and other valid stuff you want working well.

stavros ranked #45 [karma: 76049]

"While the spoof key isn't blacklisted" is the critical bit. Soon, all the keys will be, as these old devices age away from being too common to blacklist.

pjc50 ranked #24 [karma: 106801]

The problem with this discussion is that this is a wonk solution for wonkish times. You're trying to thread the needle between various reasonable compromises. Ironically due to social media, that is simply not how politics and lawmaking works any more. Instead it's an emotionally driven fight between various different sorts of moral panic, and the only option is to get people more mad about surveillance than "think of the children".

You might be able to get somewhere by getting a tech company on your side, but they generally also hate adult content and don't mind banning it entirely.

(people are not going to get age verification _banned_ any time soon! That's simply not going to happen!)

coldtea ranked #33 [karma: 90320]

>my tinfoil hat theory is that they make small features depend on new hardware.

The general case is hardly a "tinfoil hat theory". They openly do that, and the major reason is to tie to new hardware adoption.

That said, it doesn't usually work the way you describe. It's not adding new features that depend on HW optimization to slow older machines down (after all, one could just not use those features on an older machine, or toggle them off).

It's rather: you want these shiny new features, which are all we advertise for iOS/macOS N+1? The big ones will only work if you have a newer machine, even though we could trivially enable them on older machines (and some don't even need special hardware, as there are third-party hacks that unlock them and they work fine).

signa11 ranked #37 [karma: 86763]

slop-steganography: is that a noun || a verb?

signa11 ranked #37 [karma: 86763]

> When AI writes the software, who verifies it?

oh, that's quite simple: the dude/dudette who gets blamed is the one who verifies it.

bookofjoe ranked #26 [karma: 103692]
walterbell ranked #30 [karma: 97081]

https://www.linkedin.com/pulse/memory-supply-chain-ai-disrup...

  Apple has accepted a 100% price increase for Samsung's LPDDR5X memory, with DRAM supply commitments secured only through the first half of 2026. Tim Cook acknowledged during the Q1 FY2026 earnings call that storage price increases would significantly impact Q2 gross margins. Apple is evaluating ChangXin Memory Technologies (CXMT) and Yangtze Memory Technologies (YMTC) as new supply sources, attempting to rebuild pricing leverage through supply chain diversification.

hn_throwaway_99 ranked #46 [karma: 75592]

Glad I saw a comment like this.

TBH, while I may find the output style somewhat infomercial-ly, I don't really get the hatred. ChatGPT IS NOT AN ACTUAL PERSON. Like why do people care so much? Like you said, I just ignore the "persona" phrases, and just use ChatGPT (or, used to anyway, before switching to Claude because OpenAI leadership can suck it) to get information and answer my questions.

Seriously, though, just stop using ChatGPT in any case, there are very good reasons to boycott it and there are other alternatives. Not saying the alternatives are saintly, but they're not as awfully duplicitous as OpenAI.

userbinator ranked #36 [karma: 88176]

Having absolutely zero prior context, I thought the title was an old obscure Nintendo game.

userbinator ranked #36 [karma: 88176]

Never update your BIOS unless you have a specific bug that needs fixing.

I remember a Thinkpad BIOS update ended up destroying both undervolting and overclocking, and required a "chip-clip" programmer to revert.

PaulHoule ranked #25 [karma: 105364]

Ashwagandha is awful in my opinion: like Valium it relieves anxiety and makes you stupid but the ratio is worse.

rayiner ranked #18 [karma: 125848]

The WWW has been trending towards junk since the early 2000s. HN is still the pinnacle of web design.

rayiner ranked #18 [karma: 125848]

> I think that developing (in the case of desktop) for 3 different platforms, all with their own complications of what is native UI, is a nightmare. macOS has SwiftUI (incomplete), UIKit, and AppKit; Linux in practice GTK/Qt; Windows WinUI 3 (fundamentally broken) with WPF and WinForms still hanging around.

Wouldn’t it be a good use of AI to port the same app to several native platforms?

anigbrowl ranked #28 [karma: 99027]

I had to laugh (bleakly) when someone asked him the other day who the US would prefer to see leading Iran instead of Khamenei and he admitted that all his preferred alternatives were dead.

No way to know if it's true for the time being, but I saw someone claim the other day that Khamenei had convened meetings of the more moderate senior clergy at his residence in anticipation of the attack, increasing the probability of a hardliner being chosen as his successor. Though speculative, it seems plausible.

anigbrowl ranked #28 [karma: 99027]

Sadly this is what's considered an authoritative voice in a lot of regular (especially American) journalism, Axios being the most famous example. It's instructive to read news stories or TV transcripts from previous decades for comparison with the current norm. Also depressing because it brings home how vapid most news coverage is today. This also applies to opinion articles, which have in my view led the charge into the semantic void.

I don't hate that this is the default style on many popular AI services, though. It's sufficiently distinctive that it serves as a signal that anyone posting it is an idiot and can safely be ignored.

jacquesm ranked #2 [karma: 239417]

They could have just come to an agreement, that way it would not lead to trademark dilution.

jacquesm ranked #2 [karma: 239417]

The Helsinki bike infrastructure is even better than the Dutch one, if you spend time there, get a bike!

walterbell ranked #30 [karma: 97081]

Security policy usually defaults unknown artifacts to low privileges.

stavros ranked #45 [karma: 76049]

Yes, it is a fantastic joke and I laughed for ages, well played.

mooreds ranked #35 [karma: 88728]

AKA "make the right things easy" and "build sensible defaults" rather than "all the responsibility is the individuals".

Animats ranked #10 [karma: 159937]

"Once stopped, the ADS-equipped vehicle contacted remote assistance with a prompt asking, “is this a school bus with active signals?” After another passenger vehicle passed the school bus in the right eastbound through-lane, a remote assistance agent located in Novi, Michigan, replied “No” to the prompt. The ADS-equipped vehicle then resumed travel and passed the school bus while its stop arms were still extended. A passenger vehicle following the ADS-equipped vehicle similarly passed the school bus. In total, six vehicles passed the school bus while it was stopped. A crash did not occur."

anigbrowl ranked #28 [karma: 99027]

I suggest that the US is putting its citizens at considerably more risk than they were in last week.

rbanffy ranked #5 [karma: 186390]

There is a whole universe of good-enough desktop computers that don't care that much about performance, but where power consumption is important, because high consumption makes the computer bulky, noisy, and expensive.

I'd love to have a Xeon 6, a big EPYC, or an AmpereOne (or a loaded IBM LinuxOne Express) as my daily driver, but that's just not something I can justify. It'd not be easy to come up with something for all this compute capacity to do. A reasonable GPU is a much better match for most of my workloads, which aren't even about pushing pixels anymore - iGPUs are enough these days - but multiplying matrices with embarrassingly low precision, so it can pretend to understand programming tasks.

crazygringo ranked #38 [karma: 82325]

This article is missing some important aspects/context:

> If the card processing cost is 4 percent of the sale price, the fee amounts to $6. That $6 is not 4 percent of the profit; it is 12 percent of the merchant’s margin.

Sure, but merchants are raising prices overall together with all their competitors, or charging more when using a card. Credit cards aren't taking away 12% of merchants' profits that they'd keep otherwise.

Also, credit card fees are not 4%, they're 1.5-3.5% with an average of around 2.3%.

> The merchant may pay little or nothing per transaction, the funds usually arrive immediately and no physical terminal is required.

But what's missing here is fraud protection. It's more like debit cards than credit cards, and debit card transactions are much cheaper in the US too (more like 0.7%).

Now that it's increasingly common for local merchants to implement a credit card surcharge (so non-CC users don't have to pay more), and a large percentage of credit card fees come straight back to the user as rewards (e.g. a 2% cash reward), it's not really clear that payment fees matter all that much in the end. See:

Fed Data Shows Economics of Interchange: 86% of Fees Fund Rewards Programs: https://www.pymnts.com/news/loyalty-and-rewards-news/2025/fe...
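
The fee-versus-margin arithmetic being disputed above can be made explicit. The $150 sale price and $50 per-sale margin are implied by the quoted numbers, not stated outright in the article:

```python
# Reconstructing the quoted claim: a 4% card fee on a $150 sale is $6,
# and that $6 is 12% of a $50 per-sale margin -- a much bigger bite of
# profit than of revenue. (Sale price and margin are back-solved from
# the quote's figures.)
sale = 150.00
margin = 50.00
fee = 0.04 * sale

print(fee)              # 6.0  -- 4% of revenue
print(fee / margin)     # 0.12 -- 12% of margin
```

At the more typical 2.3% average rate the comment cites, the same sale yields a $3.45 fee, or about 6.9% of that margin, which is why the quoted 4% figure matters to the argument.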

tptacek ranked #1 [karma: 417173]

I expect the evidence for this claim is axiomatic, which is to say that you think it sounds good.

coldtea ranked #33 [karma: 90320]

For a FOSS contributor, GvR always gave me the impression that he'd work on whatever there was funding for, and abandon the work/domain when that dried up. Not exactly hype-and-dump, but no major internal drive to see those things through, unlike someone like Linus or even someone like Jonathan Blow.

Like his early work on programming for kids (CP4E), and all kinds of short-lived Python projects like Unladen Swallow, with the MS-funded speedup work being a later example.

coldtea ranked #33 [karma: 90320]

>Some human still has to be accountable. Someone has to get fired / go to jail when something screws up.

Why? The logic of ever less personal pride, involvement, and care eventually leads to just putting the blame on AI and being done with it.

Issues? Casualties? It's a bug, somebody fixes it and we move on. Or it's just a cost we need to get used to in order to live in the great new world of AI.

We're in an era where nobody involved goes to jail over the Epstein case, and the world keeps turning. Do we really think people will care whether anybody goes to jail when somebody loses their pension, gets wrongly imprisoned, or dies on an operating table because of an AI mistake?

If anything, legal, union, and other limitations on who gets to decide (having to have a human ultimately responsible) might be torn down, to fully embrace the blame-shifting capabilities of the digital bureaucracy.

tptacek ranked #1 [karma: 417173]

The system of copyright enshrined at the time of the founders is drastically more restrictive than this, which undercuts your "sick joke designed to enrich lawyers" line.

tptacek ranked #1 [karma: 417173]

Right, but one problem is that people with kids do care a lot that they're going to school in the dark.

jacquesm ranked #2 [karma: 239417]

That only works as long as the powers that be don't start quoting this stuff themselves.

JumpCrisscross ranked #7 [karma: 177193]

Herman Miller's Jarvis [1]. I'm probably paying up for the brand, but I got it installed a few years ago (with the nano-textured Studio display), and it works beautifully.

[1] https://store.hermanmiller.com/home-desk-accessories/jarvis-...

walterbell ranked #30 [karma: 97081]

https://arstechnica.com/gadgets/2026/03/m5-pro-and-m5-max-ar...

  Apple says that M5 Pro and M5 Max use an “all-new Fusion Architecture” that welds two silicon chiplets into a single processor. Apple has used this approach before, but historically only to combine two Max chips together into an Ultra.

  Apple’s approach here is different—for example, the M5 Pro is not just a pair of M5 chips welded together. Rather, Apple has one chiplet handling the CPU and most of the I/O, and a second one that’s mainly for graphics, both built on the same 3nm TSMC manufacturing process.

Animats ranked #10 [karma: 159937]

It begins to look like Citizens United [1] cost us democracy.

"That government of the rich, by the rich, and for the rich shall not perish from the earth."

[1] https://en.wikipedia.org/wiki/Citizens_United_v._FEC

dragonwriter ranked #16 [karma: 127453]

US states cannot, under current federal law, go permanent daylight time (they can go permanent standard time, though), and they can't unilaterally make the latter have the same effect as the former by simultaneously switching time zones because they can't switch timezones without approval of the federal Department of Transportation. I don't see the current federal government making special accommodation for California, Oregon, or Washington on, well, anything in the next eight months, so...

PaulHoule ranked #25 [karma: 105364]

I’ll add that macOS is crammed with spammy ads for Apple Music and other services I don’t want. To be fair, somebody wants Apple Music, whereas the Microsoft versions of those things are completely unwanted.

Ads and nags in the Windows world are drawn using the same HTML-based technology that has replaced native Windows apps since Windows 8; the ads and nags in macOS are 2025 retreads of the 1999 Mac OS X imitations of the modal dialogs from 1984 classic Mac OS. It’s sad. When I set up a new Mac for my wife she was furious at how ad-infested it was, especially browsing the web with Safari, and if you want to add an ad blocker you need an Apple Account, which is something she’s done without while using Macs for 20+ years.

dragonwriter ranked #16 [karma: 127453]

Note that this has very little bearing on the real interesting questions of whether and when human authors can copyright works where AI was used as a tool; this case is specifically about attempts by Thaler to apply for copyright listing an AI as author of a work for which he explicitly denied any human authorship.

jerf ranked #32 [karma: 91234]

From my post at https://jerf.org/iri/post/2026/what_value_code_in_ai_era/ , in a footnote:

"It has been lost in AI money-grabbing frenzy but a few years ago we were talking a lot about AIs being “legible”, that they could explain their actions in human-comprehensible terms. “Running code we can examine” is the highest grade of legibility any AI system has produced to date. We should not give that away.

"We will, of course. The Number Must Go Up. We aren’t very good at this sort of thinking.

"But we shouldn’t."

dragonwriter ranked #16 [karma: 127453]

> And if the big labs are able to reliably block distillation,

The big labs will not be able to reliably block distillation without further inhibiting general use of the models, which itself will help tip the balance away from commercial models.

jerf ranked #32 [karma: 91234]

"It is entirely jarring if an AI NPC says something that's not consistent with the game state, or changes the game state in a way that violates the constraint of the understood rules/boundaries of the game"

I played with the Mantella mod for Skyrim a few months back, and one of the problems with LLMs is you can't keep them on topic. I even used a custom-trained one just for Skyrim, but the problem was it still had vast real-world knowledge it shouldn't. For instance, I asked a town guard where I could find Taylor Swift, and he said she might be down at the tavern playing music. While the conversation didn't overtly break the theme (the guard "stayed in character" and didn't start gushing about specific songs of hers or something), he still "knew" who she was. Current-gen AIs can't be fenced in very well. And almost every game idea needs some sort of fencing in.

If you play along with the AI it's not bad but if you poke the edges the illusion breaks quickly. You can't prompt a current-gen AI to just "forget everything you shouldn't know because it doesn't fit in the game universe."

I expect future architectures probably will fix this and that will help a lot. But we don't have them yet.

ceejayoz ranked #34 [karma: 89492]

See also: Grocery stores. Prices went up "due to COVID". Prices will never come down again.

(I've no doubt the supply chain was a mess for a hot minute, but years later?)

PaulHoule ranked #25 [karma: 105364]

I think there's a certain antipathy between "hustle culture" and gaming

https://components.news/the-gamer-and-the-nihilist/

that is, people who are caught up in AI FOMO are performatively trying to appear productive, and that's the opposite of fun.

dragonwriter ranked #16 [karma: 127453]

Is it more likely to be clear and reliable if it is AI-written, or are features associated (both directly and by correlation) with clear writing increasingly misperceived as “AI tells” because they are also favored in LLM training?

jedberg ranked #44 [karma: 77921]

Not sure I agree (and I made the jump from IC to management).

Look at the parallel tracks. A VP is the same level as a distinguished engineer, roughly. To be a VP, you have to be a great manager and have gotten lucky with a few big projects.

To be a DE, you basically have to be famous within the industry. And when I look at a large tech company, while there aren't a lot of VPs, usually the number of DEs is countable on one hand (or maybe two).

They are very different skill sets. You shouldn't choose your role based on money or career progression, you should choose based on what you love to do, because especially in this world of AI replacing all the "boring" work, the only people who will be left will be the ones passionate about what they are doing.

steveklabnik ranked #29 [karma: 97110]

Regular variable definition shadows. Macros expand to regular Rust code; they could always be replaced by the expanded body.
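A minimal sketch of what that means (the `double!` macro here is hypothetical, purely for illustration): a macro that receives an identifier from the call site expands to an ordinary `let`, which shadows the caller's binding exactly as hand-written code would.

```rust
// Hypothetical macro for illustration: it expands to a plain `let` that
// shadows the binding named by the caller. Because the identifier comes
// from the call site, macro hygiene does not prevent the shadowing.
macro_rules! double {
    ($name:ident) => {
        let $name = $name * 2;
    };
}

fn main() {
    let x = 5;
    double!(x); // same as writing `let x = x * 2;` by hand
    assert_eq!(x, 10);
    println!("x = {}", x);
}
```

Replacing the `double!(x)` call with its expanded body, `let x = x * 2;`, produces an identical program, which is the point being made above.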

toomuchtodo ranked #23 [karma: 107017]

The credential is of questionable value; it’s a checkbox to enable international folks to buy their way in via educational visas and to soak US students for student loan debt that can’t be discharged. It’s gating economic outcomes, not an objective measure.

Brajeshwar ranked #50 [karma: 71886]

Welcome back. One of my staple YouTube subscriptions.

I’m today years old learning that the light that we actually see on Earth today came out hundreds of thousands of years ago.

crazygringo ranked #38 [karma: 82325]

Google didn't do anything wrong; they lost their Yahoo account, and that was the only way Google had of verifying their older Gmail account. What do you expect when you don't have access to your recovery method, and it's a free service, so it's not like you can prove ownership via a credit card previously used for billing or something? And especially since that was presumably from before the days when Gmail required a phone number, so your recovery e-mail was the only mechanism, and things like 2FA authentication codes didn't exist.

bookofjoe ranked #26 [karma: 103692]

Spot on. 99+% of those reading/making these comments use an ad blocker; 99+% of non-techies like me never have and never will.

jacquesm ranked #2 [karma: 239417]

As you should be. I have not so far verified my age for anything; if that becomes a requirement, I'll just bow out.

jacquesm ranked #2 [karma: 239417]

It isn't doing that now, but you can't be sure what they're going to be up to a little ways down the line; the fact that they are clearly trying to misdirect the traffic is proof positive that they're up to no good.

Just do a bit of risk assessment on what would happen if something like this were shipped to people who have come to blindly trust the source, and you'll see why letting this slip is a very bad idea.

minimaxir ranked #47 [karma: 73752]

The original comment is asking from a legal perspective in a very specific example, not an emotional one.

jacquesm ranked #2 [karma: 239417]

YouTube is insanely inefficient compared to a good AI model in interactive mode.

jerf ranked #32 [karma: 91234]

World models do not belong here, or at least, we're still some years from figuring out how they would. If I want to text my wife I can imagine just telling my phone that somehow rather than using the current UI paradigm, but what "world model" am I going to pull up that is helpful to do that? World models belong in their own stream, along with rendering in general (movies, etc.), games, VR, and other similar things that we do not today classify as UI, for good reason.

In fact I suspect "world models" may let us re-experience some idiocy from yesteryear we thought we had put behind us, like [1]. Can't wait to go "shopping" in a "world model" of a store again! However will I survive in 2026, merely zipping around the store buying my favorite items off of my favorites list as fast as I can think of them, and using search on the thousands of available items, rather than WASD'ing my way through a "model" of the store?

By contrast I think the browser is undersold. GUI toolkits existed before browsers, but they were all based on widget layouts. That is, the top level of the widget hierarchy would be some layout engine, which had components, which had subcomponents, which had a widget, etc. Some were more dynamic and relative, some used a lot of absolute positioning, but they were all structured in this way. Browsers introduced a new paradigm, where textual layout was the "top level" of the tree, and the widgets all fit within that. Prior to a browser, a Mad Lib-style game where you have text boxes interspersed in a bunch of text was quite difficult. Many GUI toolkits would have required an individual absolutely-laid-out pane for each game you want to play because it couldn't do its own layout on interspersed text and widgets at all; most if not all of them (perhaps Tk excepted, though I'm not familiar with what it could do in the 1990s as I picked it up later) would have made heavy weather of it if they could do it. (Although GUIs made heavy weather of things in general before browsers.) Now all the GUI toolkits have a lot more support for textual-layout like browsers and of course the browsers have carried on like crazy.

AIs-running-in-browsers seem a very powerful paradigm to continue forward with.

With regard to the "world model," I see "augmented reality" as the major move forward there. It has consistently failed for a long time, but I think there's a very plausible case to be made that it was just premature tech, that it doesn't work without the powerful AIs that only recently came out and are still pretty hard to stuff onto a realtime platform. That will start to enable some very interesting UI paradigms at some point. But that still really doesn't replace WIMPs; it's more a new frontier entirely. Again, I don't necessarily want to "use augmented reality" to text my wife. AR may prompt for it in some particular circumstance, but if I'm originating one out of the blue I'm going to use a conventional UI to do that, not try to wrangle AR into it.

[1]: https://www.shamusyoung.com/twentysidedtale/?p=35440

rayiner ranked #18 [karma: 125848]

The labor market for recent college graduates is very soft right now, so it seems like a good time for a pause on importing foreign workers: https://www.investopedia.com/workers-who-attended-college-ar...

walterbell ranked #30 [karma: 97081]

When they don't have to pay a percentage of sales price as royalty to Qualcomm.

bookofjoe ranked #26 [karma: 103692]

>Some human still has to be accountable. Someone has to get fired / go to jail when something screws up.

I remember growing up and always hearing "The computer is down" as an excuse for why things were cancelled/offices closed/buses and trains not running/ad infinitum.

At some point I read an article that pointed out that the reason the computer was down was that a person had made a [coding] error: the computer itself was fine.

I've yet to read about how a person who caused the computer to be down was disciplined.

paxys ranked #41 [karma: 80536]

You are right, this would never happen in an advanced country like the USA, and certainly not in a top federal court:

https://www.reuters.com/sustainability/society-equity/two-fe...