What are the most upvoted users of Hacker News commenting on? Powered by the /leaders top 50 and updated every thirty minutes. Made by @jamespotterdev.
Funny, my experience is that properly written HTML parsers can be easy to specialize quickly for a wide range of web sites, whereas just logging in to an API can be a battle with a Rube Goldberg machine for what… a license to suck through a coffee stirrer? I am still using a parser I wrote for Flickr image galleries 15+ years ago that frequently “just works” on new sites without modification, and when it does take modification, the new rules are a handful of LoC.
Almost always, an API is not a gift but rather a take-away.
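A minimal sketch of the kind of easily specialized parser described above, using Python's stdlib `html.parser`. The "site-specific rule" here (grab any `<img>` with a `src`) is invented for illustration; a real gallery parser would have a handful of such rules per site.

```python
from html.parser import HTMLParser

class GalleryParser(HTMLParser):
    """Collect image URLs from a gallery page. The per-site 'rules'
    are just a few lines like the attribute check below."""
    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Site-specific rule: keep <img> tags that carry a src attribute.
        if tag == "img" and "src" in attrs:
            self.images.append(attrs["src"])

parser = GalleryParser()
parser.feed('<div class="gallery"><img src="/a.jpg"><img src="/b.jpg"></div>')
print(parser.images)  # ['/a.jpg', '/b.jpg']
```

Adapting this to a new site typically means swapping in a different tag/attribute rule, which is why the marginal cost per site stays at a few lines.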
> By the time 1983 rolled around
That early? There were people claiming that back then, but it didn't really work.
Indeed, wisdom is being able to see the lifecycle like a time knife and making favorable decisions based on past experiences (i.e., getting the most value from what may need to be built and operated for its lifetime). Writing code is easy; managing a living codebase with ever-changing business requirements and stakeholders is hard.
> The same PM who proscribed PA defended in court a woman who did exactly what PA was doing, painting warplanes in protest
Out of curiosity, who?
I'm staying far away from this AI stuff myself for this and other reasons, but I'm more worried about this happening to those running services that I rely on. Unfortunately competence seems to be getting rarer than common sense these days.
> What made it worth it for you?
Opportunities. You don't need many readers, you just need the right readers. I'm a big believer in making your own luck - putting things in place that make luck more likely to strike. Having a collection of writing online that people might stumble onto is a very effective way of doing that.
> What kinds of posts actually worked (for learning, career, network, opportunities)?
I've written a bunch about this in the past. TLDR version:
- Stuff I've learned: TIL style posts that describe something I've learned recently
- Stuff I've found: links to things that are useful, with an explanation of why they are useful
- Stuff I've built: descriptions of projects I've completed
What to blog about: https://simonwillison.net/2022/Nov/6/what-to-blog-about/
My approach to running a link blog: https://simonwillison.net/2024/Dec/22/link-blog/
> Any practical format that lowers the bar (length, cadence, themes)?
TILs are an incredibly liberating format. You don't need to be describing something that's never been written about before - just something that's new to you today.
> If you were starting today, what would you do differently
I'd use static publishing on GitHub Pages on myname.github.io so I don't even need to run any web hosting or buy a domain name.
need to override their user agent strings to "curl" to make the site load again.
That seems very on-brand for them, as curl's default UA gets blocked by most sites.
> I still eat salads containing it occasionally but I’d microwave the hell out of it
At the energy levels you'd need to sterilise lettuce in a microwave, you've cooked off a lot of its nutrition. If you aren't reaching those temperatures, you're not improving on simply washing your greens.
> they have other ways to deprecate your device
This is a wild take for a company known for the long lives of its devices.
Interesting that this was flagged (I do wonder if that is reflexive or not). The main argument that context is essential to precision (in current architectures) is pretty solid.
I didn't see this game
Once again I am reminded that "knowing" which accounts are fake is a knowable thing, and yet social media companies don't mitigate them "because money" or "because DAU", etc. When I was running operations at Blekko (a search engine), we were busily identifying all the bots that were attempting ad fraud or scouring the web for vulnerabilities or PII to update "people" databases. And we just mitigated them[1], even though it meant that from a 'traffic' perspective we were blocking probably 3-4 million searches/day.
[1] My favorite mitigation was a machine that accepted the TCP connection from a bot address and just never responded after that (except to keep-alives). I think the longest client we had hung that way had been waiting for over 3 months for a web page that never arrived. :-)
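A toy version of that mitigation (a TCP "tarpit") can be sketched in Python: accept the connection, enable keep-alives, and never send a byte. The host/port defaults and single-threaded loop are illustrative only, not how a production mitigation box would be built.

```python
import socket

def make_tarpit_socket(host="127.0.0.1", port=0):
    """Create the listening socket; port 0 lets the OS pick a free port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()
    return srv

def tarpit(srv):
    """Accept connections and never respond. TCP keep-alives keep the
    connection looking alive, so the client waits indefinitely for a
    page that never arrives."""
    held = []
    while True:
        conn, _addr = srv.accept()
        conn.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        held.append(conn)  # hold the socket open; send nothing

# To run: tarpit(make_tarpit_socket())  -- blocks forever, holding clients
```

The cost asymmetry is the point: the tarpit holds a file descriptor, while the bot burns a connection slot and timeout budget for months.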
(saulpw is the author of VisiData, and it's a marvelous piece of software.)
Some recommend non-edible petroleum-based mineral oil (aka liquid paraffin) because it doesn’t go rancid, but it has the same effect of not actually doing much for protection, and it will leak into hot liquids.
Highly-refined mineral oil is food-safe.
https://en.wikipedia.org/wiki/Mineral_oil#Food_preparation
Why even use wood if you’re going to cover it in a layer of clear plastic?
I find it amusing that those who will use wood or "natural" (petroleum is also naturally occurring...) products for some sort of weird misguided eco-virtue-signaling inevitably end up needing to basically reinvent the chemistry of finding an inert, durable material that brought us modern plastics. All these drying oils create a layer of polymerised material, which can be classed as plastic anyway. Waxes, regardless of source, owe their properties to long hydrocarbon chains, just like polyethylene.
> Sure, I can watch Opus do my work all day long and make sure to intervene if it fucks up here and there, but how long will it be until even that is not needed anymore?
Right: if you expect your job as a software developer to be effectively the same shape in a year or two you're in for a bad time.
But humans can adapt! Your goal should be to evolve with the tools that are available. In a couple of years' time you should be able to produce significantly more, better code, solving more ambitious problems and making you more valuable as a software professional.
That's how careers have always progressed: I'm a better, faster developer today than I was two years ago.
I'll worry for my career when I meet a company that has a software roadmap that they can feasibly complete.
TypeScript won over the alternatives, exactly because it is only a type checker, and not a new language.
Granted, they initially weren't on that path, but they course-corrected in time, and not many people use stuff like enums in new code.
> get dinged because you couldn’t remember whether the protagonist put on an otherwise irrelevant blue sweater or red jacket
This sounds like a bad quiz, unless the story was set in e.g. the American revolution.
There's an uncanny element to the writing here, but my bigger issue is that it presents a sort of linear progression through stages of life and startup operating, saying 36-42 are strong ages for doing startup work, but 42 is the last of those years and 51 is past it: no? An unsupported claim? There are ways in which it is much harder to do a startup at 36 than at 51.
It seems clear why 20-somethings have advantages, but extrapolating that out is I think a mistake.
I also think subheds like "Naive Conviction" and "Capitalized Execution" and "Durable Craft" are going to set people off, and as a bit of writing advice I'd avoid them, along with constructions like "It's not X. It's Y." or "X isn't Z. Y is." It's also kind of not-great writing? It starts to sound like something written for Bill Shatner to read.
I gave nearly 100 talks this calendar year ... most are repeats where I'm invited to give a talk that people have seen elsewhere. There were about 25 different talks.
Some of the advice given in this post is universal, some is very, very specific and should be taken with a huge fistful of salt.
So assess it for yourself. Does it feel like it applies to you? Then adopt it.
Does it feel odd, alien, or simply wrong? Don't dismiss it immediately. Give it some attention, try to understand why the author is suggesting it, then decide whether or not to give it a go.
HyperCard was foundational for me. Not just for programming, but incidentally, for public speaking. As a kid in jr high school, I took part in a program where you’d give presentations about programs you wrote. My very first ones were HyperCard stacks.
In the D compiler, I realized that while loops could be rewritten as for loops, and so implemented that. The for loops are then rewritten using goto's. This makes the IR a list of expression trees connected by goto's. This data structure makes Data Flow Analysis fairly simple.
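A toy illustration of that two-step lowering (the tuple-based IR and helper names here are invented for illustration, not the D compiler's actual representation): a `while` becomes a `for` with empty init/increment clauses, and a `for` flattens into labels and gotos.

```python
def lower_while(cond, body):
    """while (cond) body  ->  for (; cond; ) body"""
    return ("for", None, cond, None, body)

def lower_for(init, cond, inc, body, label):
    """Flatten a for loop into a list of labeled statements and gotos."""
    out = []
    if init:
        out.append(("expr", init))
    out.append(("label", f"Ltop{label}"))
    if cond:
        out.append(("goto_if_false", cond, f"Lend{label}"))
    out.extend(body)
    if inc:
        out.append(("expr", inc))
    out.append(("goto", f"Ltop{label}"))
    out.append(("label", f"Lend{label}"))
    return out

# while (i < 10) { i = i + 1; }
loop = lower_while(("lt", "i", 10), [("expr", ("assign", "i", ("add", "i", 1)))])
flat = lower_for(*loop[1:], label=0)
for stmt in flat:
    print(stmt)
```

After lowering, the whole function is just expression trees connected by jumps, which is the flat shape that makes data flow analysis straightforward.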
I implemented an early function inliner by inlining the IR. When I wrote the D front end, I attempted to do this in the front end instead. This turned out to be a significantly more complicated problem, and in the end not worth it.
The difficulty with the IR versions is, for error messages, it is impractical to try and issue error messages in the context of the original parse trees. I.e. it's the ancient "turn the hamburger into a cow" problem.
He got kicked out of his religion for blasphemy. While he says “God” in a literal sense, his definition of such is certainly not in line with what most people consider to be God.
This game—who hurt whom first—doesn’t work outside the new world. It particularly fails in parts of the world that were prehistorically settled.
The uutils project has found bugs in upstream, added extra tests, and clarified behavior. It’s helped both projects improve.
Their stock price recently passed the split adjusted highs of 2000.
I wish. For plenty of SaaS products, the main query API is GraphQL.
> The main problem GraphQL tries to solve is overfetching.
My issue with this article is that, as a GraphQL fan, overfetching is far from what I see as its primary benefit, and so the rest of the article feels like a strawman to me.
TBH I see the biggest benefits of GraphQL as being that it (a) forces a much tighter contract around endpoint and object definition with its type system, and (b) makes schema evolution much easier than other API tech does.
For the first point, the entire ecosystem guarantees that when a server receives an input object, that object will conform to the type, and similarly, a client is guaranteed that a return object conforms to the endpoint response type. Coupled with custom scalar types (e.g. "phone number" types, "email address" types), this can eliminate a whole class of bugs and security issues. Yes, other API tech does something similar, but I find the guarantees are far less "guaranteed" and it's much easier to have errors slip through. For example, GraphQL always prunes return objects to just the fields requested, which most other API tech doesn't do, and this can be a really nice security benefit.
When it comes to schema evolution, I've found that adding new fields and deprecating old ones, and especially that new clients only ever have to be concerned with the new fields, is a huge benefit. Again, other API tech allows you to do something like this, but it's much less standardized and requires a lot more work and cognitive load on both the server and client devs.
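The response-pruning behaviour mentioned above can be shown with a toy helper: this is a deliberate simplification of what any GraphQL server does when it trims a response to the query's selection set, not a real library call.

```python
def prune(obj, selection):
    """Return only the fields named in the nested selection, the way a
    GraphQL server trims responses to the exact shape of the query."""
    if isinstance(selection, dict):
        return {field: prune(obj[field], sub) for field, sub in selection.items()}
    return obj  # leaf field: return the value as-is

user = {"name": "Ada", "email": "ada@example.com", "role": "admin"}
# Query equivalent: { user { name email } } -- "role" never leaves the server.
print(prune(user, {"name": None, "email": None}))
# {'name': 'Ada', 'email': 'ada@example.com'}
```

The security benefit falls out of this shape: a field a client didn't ask for is never serialized, so accidentally-sensitive fields don't leak just because they exist on the backing object.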
If we'd had blogs back in the 1980s, someone would have written a post that sounded just like this, but about databases. People really did talk this way about "databases". There were people who were afraid of them.
I feel the same way about Junie.
I have an "image sorter" that sucks in images from image galleries into a tagging system with semantic web capabilities ("Character:Mona -> Copyright:Genshin_Impact") and ML capabilities (it learns to tag the same way you do).
Gen 1 of it was cued by a bookmarklet to have a webcrawler pull the gallery HTML and the images. I started having Cloudflare problems, so Gen 2 worked by saving complete pages and importing the directories. That had problems too, so I was looking at a Gen 3 using a bookmarklet to fetch and POST the images out of the browser into the server. So I tell Junie my plan and it tells me I'll have CORS trouble and "Would you consider making a browser extension?"
Well I had considered that but was intimidated at the prospect, figured I'd probably have to carve out an uninterrupted weekend to study browser extensions, kick my son out of the house to go busk with his guitar instead of playing upstairs (the emotional/social bit is important) and even then have a high chance of not really getting it done and then end up taking another month to get an uninterrupted weekend. I told Junie "I've got no idea how you do that, don't you need a build system, don't you need to sign it?" and it said "No, you can just make the manifest file and a JS file and do a temporary install" so I say "make it so" and in 20 minutes I have the browser extension.
It still isn't working end-to-end, but I'm now debugging it and ought to be able to get it working in a weekend with interruptions even if I didn't get any more AI help.
JustHTML https://github.com/EmilStenstrom/justhtml is a neat new Python library - it implements a compliant HTML5 parser in ~3,000 lines of code that passes the existing 9,200-test HTML5 conformance suite.
Emil Stenström wrote it with a variety of coding agent tools over the course of a couple of months. It's a really interesting case study in using coding agents to take on a very challenging project, taking advantage of their ability to iterate against existing tests.
I wrote a bit more about it here: https://simonwillison.net/2025/Dec/14/justhtml/
It’s horrifying to peek into industries you’re unfamiliar with and see what’s going on. It’s like lifting up the rotting log on your property. You just want to quickly put it back down.
WUSTL has more administrators than students:
> Academic staff 4,551 (2024)
> Administrative staff 17,979 (2024)
> Students 16,399 (fall 2024)
The University of Munich, a prestigious university in Germany, has only 8,200 administrators for 54,000 students. So less than half the administrators for more than triple the number of students.
The University of Munich has a budget of 800 million Euro. Excluding the medical school, WUSTL has a budget over $4 billion.
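Putting the staffing figures above side by side (numbers copied from the comment; the per-student ratios and rounding are mine):

```python
# Administrator-to-student ratios from the figures quoted above.
wustl_admin, wustl_students = 17_979, 16_399
munich_admin, munich_students = 8_200, 54_000

wustl_ratio = wustl_admin / wustl_students
munich_ratio = munich_admin / munich_students
print(f"WUSTL:  {wustl_ratio:.2f} administrators per student")
print(f"Munich: {munich_ratio:.2f} administrators per student")
print(f"WUSTL is roughly {wustl_ratio / munich_ratio:.0f}x more admin-heavy")
```

So on these numbers WUSTL runs about 1.1 administrators per student versus Munich's roughly 0.15, a difference of about 7x.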
My hypothesis for both this and most of the things stated in the article: the EEG machine itself is picking up the fluctuations in its EM environment.
I mean, what's more likely: that binaural beats retune the brain, or that someone forgot that any straight-ish piece of wire is a radio antenna, and the signal being seen comes straight from the headphones? Using pneumatic headphones would make it go away too.
Evidence for claim #2? The sectors where the Baumol effect has been most painful (housing, childcare, education, with the exception of healthcare) are ones that have much higher levels of competition and distribution than areas where prices have rapidly dropped. Construction Physics, for example, did an analysis [1] that showed that the top multifamily housing developer has 2% marketshare; the top residential housing developer (DR Horton) has 8.4% and subs out almost all the work, and the top 4 together have only 20% of the market. Compare with tech markets like browsers, search engines, or operating systems where the top firm alone often has 80% market share.
[1] https://www.construction-physics.com/p/why-are-there-so-few-...
I've heard AI coding described as "It makes the first 80% fast, and the last 20% impossible."
...which makes it a great fit for executives that live by the 80/20 rule and just choose not to do the last 20%.
To be expected, it had little value against VSCode and all its forks, or the hardcore users with their custom Emacs and vi forks.
Or I don't know, just use C++ lambdas instead?
Wordpress is nice, but not in the same league as Sitecore, AEM, Optimizely, Dynamics, and many other enterprise-class CMSes.
I guess those belong to the remaining 25%.
Not really; there are tons of software projects written in C that should never have come close to a C compiler in the first place.
Even the UNIX and C authors agree with this, given what they worked on, in OS design and programming languages post C.
Something that many gloss over: what have their UNIX idols actually done after UNIX System V?
Thus the first question is whether it's kernel code or not, and if not, why it should be written in C versus any other safer alternative.
Actual reasons like performance numbers required by the application, or existing SDK availability, not just "because I like it".
That's interesting.
A problem I have with Brian Merchant's reporting on this is that he put out a call for stories from people who have lost their jobs to AI and so that's what he got.
What's missing is a clear indication of the size of this problem. Are there a small number of copywriters who have been affected in this way, or is it endemic to the industry as a whole?
I'd love to see larger scale data on this. As far as I can tell (from a quick ChatGPT search session) freelance copywriting jobs are difficult to track because there isn't a single US labor statistic that covers that category.
This somewhat reminds me of the old MakeProcInstance mechanism in Win16, which was quickly rendered obsolete by someone who made an important realisation: https://www.geary.com/fixds.html
Another seemingly underutilised feature closely related to {Get,Set}WindowLong is cbClsExtra/cbWndExtra which lets you allocate additional data associated with a window, and store whatever you want there. The indices to the GWL/SWL function are quite revealing of how this mechanism works:
https://learn.microsoft.com/en-us/windows/win32/api/winuser/...
Any assumption made in order to ship a product on time will eventually be found to have been incorrect and will cause 10x the cost that it would have taken to properly design the thing in the first place. The problem is that if you do that proper design you never survive to the stage where you have that problem.
I think the solution to that is to continuously refactor, and to spell out very clearly what your assumptions are when you are writing the code (which is an excellent use for comments).
WASI is basically CORBA, and DCOM, PDO for newer generations.
Or if you prefer the bytecode based evolution of them, RMI and .NET Remoting.
I don't see it going that far.
The WebAssembly development experience on the browser mostly still sucks, especially the debugging part, and on the server it is yet another bytecode.
Finally, there is hardly any benefit over OS processes, talking over JSON-RPC (aka how REST gets mostly used), GraphQL, gRPC, or plain traditional OS IPC.
I was glad to see this because I had the same exact question, but then I realized that given this machine seems to be designed for manually loading the water into it, a dedicated "rinse cycle" probably wouldn't help much because it's probably easier to just manually rinse the clothes after.
I would rather see more PyPy love, but here we are.
Your insistence that LLMs are not useful tools is difficult for me to empathize with as someone who has been using them successfully as useful tools for several years - and sharing in great detail how I am using them.
https://simonwillison.net/2025/Dec/10/html-tools/ is the 37th post in my series about this: https://simonwillison.net/series/using-llms/
https://simonwillison.net/2025/Mar/11/using-llms-for-code/ is probably still my most useful of those.
I know you absolutely hate being told you're holding them wrong... but you're holding them wrong.
They're not nearly as unpredictable as you appear to think they are.
One of us is misleading people here, and I don't think it's me.
> If your software solves a tightly connected business problem, microservices probably aren't the right fit.
If your software solves a single business problem, it probably belongs in a single (still micro!) service under the theory underlying microservices, in which the "micro" is defined in business terms.
If you are building services at a lower level than that, they aren't microservices (they may be nanoservices.)
This paper starts to go downhill around "The easier-than-expected problem of consciousness".
The Meta paper [1] is much more useful. They claim to be reading out what someone is seeing, in a rather approximate way. The sensing is improving. One project was able to sense magnetic fields at 13 points at 1 kHz using a custom helmet fitted with sensors.[2] The technology is still in the early stages, but they got rid of the high vacuum and cryogenics needed for SQUID sensors. Progress.
This currently has fewer data points than functional MRI, but more bandwidth. fMRI, after all, is measuring blood flow. It's like trying to figure out what an IC is doing by watching its infra-red heat emissions. "Look, the FPU is working hard now."
That paper is a few years old. What's been going on since?
[1] https://ai.meta.com/blog/brain-ai-image-decoding-meg-magneto...
Doesn't x32 only have four registers available in the calling convention, AX-DX?
For those used to traditional language syntax, anything in the APL family is like Chinese to someone who only knows Latin-family natural languages. It's always amusing to see all the reaction comments when APL/J/K is posted here.
There are lots of little hand-crank washing machines on Alibaba and Amazon. Most are plastic and rather fragile looking. Many seem to use the mechanism of salad spinners. The Sears WonderWash seems to be popular.
Zeroing memory is trickier than that; if you want to do it in Rust you should use https://crates.io/crates/zeroize
I backed this project: https://www.crowdsupply.com/modos-tech/modos-paper-monitor on Crowd Supply to see how close they can come to a "monitor" experience with an e-paper display.
I don't know if it even needs to be intentional. On mobile, it's incredibly easy to downvote when you mean to upvote.
Another commenter asked how ventilation is supposed to work -- it does say "air ventilation vents" [1], though it's extremely unclear from photos where those are or how they work, and how it's compatible with not drowning when you get dumped into the sea and they're on the bottom.
But I'm also wondering about where fresh water is coming from and where waste products go. It talks about a water storage bladder/tank, but surely that's intended for weeks max, not a year?
Kudos to those who performed the recovery and snatched this back from the sands of time.
While I think that's a bit harsh :-) the sentiment of "if you have these problems, perhaps you don't understand systems architecture" is kind of spot on. I have heard people scoff at a bunch of "dead legacy code" in the Windows APIs (as an example) without understanding the challenge of moving millions of machines, each at different places in the evolution timeline, through to the next step in the timeline.
To use an example from the article, there was this statement: "The split to separate repos allowed us to isolate the destination test suites easily. This isolation allowed the development team to move quickly when maintaining destinations."
This is architecture bleed-through. The format produced by Twilio "should" be the canonical form, which is submitted to the adapter, which mangles it into the "destination" form. Great: that transformation is expressible semantically in a language that takes the canonical form and spits out the special form. Changes to the transformation expression should not "bleed through" to other destinations, and changes to the canonical form should be backwards compatible to prevent bleed-through of changes in the source from impacting the destination. At all times, if something worked before, it should continue to work without touching it, because the architecture boundaries are robust.
Being able to work with a team that understood this was common "in the old days" when people were working on an operating system. The operating system would evolve (new features, new devices, new capabilities) but because there was a moat between the OS and applications, people understood that they had to architect things so that the OS changes would not cause applications that currently worked to stop working.
I don't judge Twilio for not doing robust architecture; I was astonished when I went to work at Google how lazy everyone got when the entire system is under their control (like there are no third party apps running in the fleet). There was a persistent theme of some bright person "deciding" to completely change some interface and Wham! every other group at Google had to stop what they were doing and move their code to the new thing. There was a particularly poor 'mandate' on a new version of their RPC while I was there. As Twilio notes, that can make things untenable.
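The canonical-form boundary described above can be sketched in a few lines; the destination names and field mappings here are hypothetical, chosen only to show that each adapter is a pure transformation out of one canonical shape.

```python
# One canonical event format; every destination owns a pure function
# from that form, so adapter changes never bleed across destinations.
CANONICAL_EVENT = {"user_id": "u1", "event": "signup", "ts": 1_700_000_000}

def to_mixpanel(event):
    """Hypothetical destination adapter: rename fields, nothing more."""
    return {"distinct_id": event["user_id"], "event": event["event"],
            "time": event["ts"]}

def to_amplitude(event):
    """Another hypothetical adapter; isolated from to_mixpanel entirely."""
    return {"user_id": event["user_id"], "event_type": event["event"],
            "event_time": event["ts"]}

ADAPTERS = {"mixpanel": to_mixpanel, "amplitude": to_amplitude}

for name, adapt in ADAPTERS.items():
    print(name, adapt(CANONICAL_EVENT))
```

Adding a field to the canonical form is backwards compatible here because no adapter reads fields it doesn't name, which is exactly the "no bleed-through" property the boundary is supposed to guarantee.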
> a perfectly ambiguous mix of truth and FUD
Congrats on Fil-C reaching heisentroll levels!
"Dixie can't meaningfully grow as a person. All that he ever will be is burned onto that cart;"
"Do me a favor, boy. This scam of yours, when it's over, you erase this god-damned thing."
Things like this and other custom "Windows distros" are a sign that MS would have no problem selling a version of Windows that's nothing more than a base OS, but clearly they would rather take the user-hostile route.
If you are really concerned you should do this and then report back. Otherwise it is just a mild form of concern trolling.
"Heck, why isn't there a cornucopia of new apps, even trivial ones?"
There is. We had to basically create a new category for them on /r/golang because there was a quite distinct step change near the beginning of this year where suddenly over half the posts to the subreddit were "I asked my AI to put something together, here's a repo with 4 commits, 3000 lines of code, and an AI-generated README.md. It compiles and I may have even used it once or twice." It toned down a bit but it's still half-a-dozen posts a day like that on average.
Some of them are at least useful in principle. Some of them are the same sorts of things you'd see twice a month, only now we can see them twice a week if not twice a day. The problem wasn't necessarily the utility or the lack thereof, it was simply the flood of them. It completely disturbed the balance of the subreddit.
To the extent that you haven't heard about these, I'd observe that the world already had more apps than you could possibly have ever heard about and the bottleneck was already marketing rather than production. AIs have presumably not successfully done much about helping people market their creations.
If the government is using the same fake data as the rest of the Internet you want to be using that fake data too. You want to be precise, not accurate. If the FBI records your endpoint as Iran and you say "I wasn't actually sending traffic from Iran, where there are sanctions, I was sending from London but my VPN provider lied on their WHOIS record", you will be in just as much trouble as if you were actually sending data from Iran.
Wait what? I've been wondering why people have been fussing over supply chain vulnerabilities, but I thought they mostly meant "we don't want to get unlucky and upgrade, merge the PR, test, and build the container before the malicious commit is pushed".
Who doesn't use lockfiles? Aren't they the default everywhere now? I really thought npm uses them by default.
I think you did a great job of bringing fairly nuanced problems into perspective for a lot of people who take their interactions with their phone/computer/tablet for granted. That is a great skill!
I think a fertile area for investigation would also be 'task specific' interactions. In XDE[1], the thing that got Steve Jobs all excited, the interaction models are different if you're writing code, debugging code, or running an application. There are key things that always work the same way (cut/paste for example) but other things that change based on context.
And echoing some of the sentiment I've read here as well, consistency is a bigger win for the end user than form. By that I mean even a crappy UX is okay if it is consistent in how it's crappy. I heard a great talk about Nintendo's design of the 'Mario world' games and how the secret sauce was that Mario physics are consistent, so as a game player, if you knew how to use the game mechanics to do one thing, you could guess how to use them to do another thing you'd not yet done. Similarly with UX: if the mechanics are consistent then they give you a stepping-off point for doing a new thing you haven't done, but using mechanics you are already familiar with.
[1] Xerox Development Environment -- This was the environment everyone at Xerox Business Systems used when working on the Xerox Star desktop publishing workstation.
Find problems to solve with code, and write code to solve those problems. You’re building muscle strength in the ability to rapidly pattern match to potential reference code paths.
Indeed, the Salt River Project nuclear generator uses reclaimed sewage water for cooling.
Plenty of people were pointing out that voting machines had poor security for about two decades. Even before that, there was the mechanically disastrous Bush vs Gore Florida ballot.
America being what it is, with endless Voting Rights Act lawsuits required to keep the southern states running vaguely fair elections, it was impossible to get a bipartisan consensus that elections should actually be fair. And so the system deteriorates.
Is there any real-life situation in which this matters, though?
If you're picking a country so you can access a Netflix show that geolimits to that country, but Netflix is also using this same faulty list... then you still get to watch your show.
If you're picking a country for latency reasons, you're still getting a real location "close enough". Plus latency is affected by tons of things such as VPN server saturation, so exact geography isn't always what matters most anyways.
And if your main interest is privacy from your ISP or local WiFi network, then any location will do.
I'm trying to think if there's ever a legal reason why e.g. a political dissident would need to control the precise country their traffic exited from, but I'm struggling. If you need to make sure a particular government can't de-anonymize your traffic, it seems like the legal domicile of the VPN provider is what matters most, and whether the government you're worried about has subpoena power over them. Not where the exit node is.
Am I missing anything?
I mean, obviously truth in advertising is important. I'm just wondering if there's any actual harm here, or if this is ultimately nothing more than a curiosity.
There is better than that.
> In the Trek universe, LCARS wasn't getting continuous UI updates
In the Trek universe, LCARS was continuously generating UI updates for each user, because AI coding had reached the point that it no longer needs specific direction, and it responds autonomously to needs the system itself identifies.
I use Junie to get tasks done all the time. For instance I had two navigation bars in an application which had different styling and I told it make the second one look like the first and... it made a really nice patch. Also if I don't understand how to use some open source dependency I check the project out and ask Junie questions about it like "How do I do X?" or "How does setting prop Y have the effect of Z?" and frequently I get the right answer right away. Sometimes I describe a bug in my code and ask if it can figure it out and often it does, ask for a fix and often get great results.
I have a React application where the testing situation is FUBAR: we are stuck on an old version of React where test frameworks like Enzyme that really run React are unworkable because the test framework can never know when React is done rendering. Working with Junie I developed a style of true unit tests for class components (still got 'em) that tests tricky methods in isolation. I have a test file which is well documented, explaining the situation around tests, and ask "Can we make some tests for A like the tests in B.test.js, how would you do that?" and if I like the plan I say "make it so!" and it does... frankly I would not be writing tests if I didn't have that help. It would also be possible to mock useState() and company and I might do that someday... It doesn't bother me so much that the tests are too tightly coupled, because I can tell Junie to fix or replace the tests if I run into trouble.
For me the key things are: (1) understanding, from a project management perspective, how to cut out little tasks and questions; (2) understanding enough coding to know if it is on the right track (my non-technical boss has tried vibe coding and gets nowhere); (3) accepting that sometimes it works and sometimes it doesn't; and (4) recognizing context poisoning. Sometimes you ask it to do something, it gets it 95% right, you tell it to fix the last bit, and it is golden; other times it argues, goes in circles, or introduces bugs faster than it fixes them, and the trick is to recognize that as quickly as you can, start a new session, and mix up your approach.
There's a counterintuitive pricing aspect of Opus-sized LLMs: they're so much smarter that in some cases they can solve the problem faster and with far fewer tokens, so they can end up being cheaper.
> Are you saying “alas for citizens of the US who see things in competitive nationalist terms”?
He’s saying it as a realist.
China is building the equivalent of America's sanctions power with its battery dominance. In an electrified economy, shutting off access to batteries and rare earths isn't as acutely calamitous as an oil embargo, but it delivers a shock similar to sanctions and tariffs.
Hire Americans first before importing more immigrant workers. When America reaches full employment (which we are a long way from as of this comment), feel free to tap other countries. If you are in US public office, and advocating for or fighting for more immigration while American workers are desperate for work, you should be run out of office, at a minimum. Your duty is to the citizens you represent, not corporations, not foreign workers, and certainly not a small population of the wealthy who receive the gains from increased shareholder value from imported immigrant workers.
Because Tesla wants their ~25-30% gross margin on Powerwalls.
Your mom was especially courageous to do this as a child!
> it's a freaking mess to deal with recycling and often times, garbage we don't know what to do
I love that this is followed by “so go nuclear!”
no paywall: https://archive.ph/jvNK7
> I'm wondering if altruism is in decline
Altruism and empathy, by name, are targets of derogation by a major political movement in the US, at least. So, yeah, absolutely.
> I sometimes even get the feeling that altruism is seen as a weakness these days.
This is fairly explicitly the case, yes.
Respectfully, this has become a message board canard. Go is absolutely a memory safe language. The problem is that "memory safe", in its most common usage, is a term of art, meaning "resilient against memory corruption exploits stemming from bounds checking, pointer provenance, uninitialized variables, type confusion and memory lifecycle issues". To say that Go isn't memory safe under that definition is a "big if true" claim, as it implies that many other mainstream languages commonly regarded as memory safe aren't.
Since "safety" is an encompassing term, it's easy to find more rigorous definitions of the term that Go would flunk; for instance, it relies on explicit synchronization for shared memory variables. People aren't wrong for calling out that other languages have stronger correctness stories, especially regarding concurrency. But they are wrong for extending those claims to "Go isn't memory safe".
I'm pretty sure Burroughs' venerable OS, so user-hostile that it inspired a brilliant movie villain, is not a fad.
Oh... You mean that other MCP... Oh well...
That's absolutely amazing.
On one of my many trips to Europe, I was wandering around the downtown area, and having walked a great deal sat down on a park bench to rest.
Two very beautiful young ladies came up to me and said, "You look like you need a hug." Instantly my spidey sense went on red alert, as I figured these two were pickpockets or scammers or ladies of the evening, since I was much too old to be of interest to them, and no woman has ever remarked that I was handsome. I asked them what they were doing, and they said they were just doing a project spreading kindness.
So I said ok and one of them gave me a truly wonderful hug, and I said thank you and they went on their way.
All I can say is "wow".
Somebody did this back in the DOS era. The program was sometimes called "the crooked accountant's spreadsheet", because you could start with the outputs you wanted and get the input numbers adjusted to fit.
Anyone remember?
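For anyone curious, the trick that program offered (later popularized in spreadsheets as "Goal Seek") is just root-finding: search for the input value that makes a formula produce a desired output. A hedged sketch using bisection, with a made-up profit formula as the example:

```typescript
// "Goal seek" via bisection: find x in [lo, hi] such that f(x) ≈ target.
// Assumes f is monotonic on [lo, hi] and target lies between f(lo) and f(hi).
function goalSeek(
  f: (x: number) => number, // the spreadsheet-style formula
  target: number,           // the output you want
  lo: number,
  hi: number,
  tol = 1e-9,
): number {
  const rising = f(hi) > f(lo); // direction of monotonicity
  for (let i = 0; i < 200; i++) {
    const mid = (lo + hi) / 2;
    const y = f(mid);
    if (Math.abs(y - target) < tol) return mid;
    // Move the bound that keeps the target bracketed.
    if ((y < target) === rising) lo = mid;
    else hi = mid;
  }
  return (lo + hi) / 2;
}

// Example: what revenue yields exactly 1000 in profit,
// if profit = 0.3 * revenue - 500?
const revenue = goalSeek(r => 0.3 * r - 500, 1000, 0, 1e6);
// revenue ≈ 5000
```

Start with the output you want, and the input gets adjusted to fit, which is exactly what made it "the crooked accountant's spreadsheet."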
> will cause vastly more moderation and the disappearance of many or most comment sections
We really don’t know this.
High voltage transmission lines are really quite efficient, and concentrating generation is usually the right choice.
That said, it doesn't make sense to have just a single place for the entire country, as there are multiple grids in the US (primarily East, West, and Texas), and with very long transmission you can get into phase issues.
The CLAUDE.md is like the documentation you hand to a new engineer on your team that explains details about your code that they wouldn't otherwise know. It's not bad to need one.
I'd be interested in seeing some of the neuroscience research, because the narrative spun by this post - that the primary reason for a change to the "zero fucks to give" attitude is hormonal and biological - seems weak to me.
I'm also someone (a man FWIW, as the article was mainly focused on the experience of women) who experienced an abrupt mental shift in my late forties. And sure, there could be some underlying biological shift I'm not conscious of, but a lot of it is simply that "pretending" at this stage no longer serves a useful purpose, and most people become aware of that at this stage of life.
I love the saying "over the hill" because it gives me a good visual of what's going on. When you're young, and looking up "from the bottom of the hill", you can fantasize about all sorts of possibilities and outcomes that can happen to you. As you age, though, more and more avenues get cut off - you're not going to be the sports, movie or rock star you dreamed of, you're not going to invent a cure for cancer, you're not going to become a billionaire, etc. When you're "over the hill", you can see pretty much into the valley below, and you have to be realistic about the possibilities. I think a lot of people may switch their "people pleasing" ways because they stop fantasizing about the benefits that may happen by "keeping all doors" open. You see you no longer have infinite time left, and you decide where to spend it more wisely. It's like the famous Confucius quote "Every man has two lives, and the second begins when he realizes he has only one."
One reason I didn't like this essay is that it seems to be trying, ironically, to explain this change in perspective/behavior, and the negative response that can come from it, especially for women, as a "biological/hormonal consequence". The whole point of having "no fucks left to give" is that you don't care how others respond to your less pleasing attitude. If you're still trying to explain it so you can understand (or try to ignore) others' negative responses, I feel like you've missed the point.
Solar and storage is the cheapest form of power now. Prices for both will continue to decline.
Battery storage hits $65/MWh – a tipping point for solar - https://news.ycombinator.com/item?id=46251705 - December 2025
Plenty of huge businesses keep all their critical data in the cloud. If they were banned from Microsoft 365 they would instantly go out of business.
It's pretty much the same as in every previous age: not having a community of experienced users, and the supporting materials they produce, has always been a disadvantage for early adopters of a new language. So the people who used it first either had a particular need it seemed to address, enough to offset that disadvantage, or had a particular interest in being in the vanguard.
And those people are the people that develop the body of material that later people (and now LLMs) learn from.
I paid $5,000 in 2007 for the best TV you could buy at the time: Pioneer Kuro Elite 50” 1080p plasma. I’m still using it as my only TV. For the past 5 years I’ve been looking to upgrade/replace it with a state-of-the-art top-of-the-line 4k OLED/micro-OLED/quantum dot/etc. — but when I go to look at current screens, none match the almost 3D depth and beauty of my plasma display. So, I’m patiently waiting for my 18-year-old TV to stop working — but much to my amazement it’s never ever needed service! Edit: Smart TVs appeared in 2007-8; mine did not offer this “feature.”
> I'm not surprised to see these horror stories
I am! To the point that I don’t believe it!
You’re running an agentic AI and can parse through the logs, but you can’t sandbox or keep backups?
Like, I’ve given Copilot permission to fuck with my admin panel and it proceeded to bill thousands of dollars creating heat maps of the density of structures in Milwaukee, buying subscriptions to SAP Joule and ArcGIS for Teams, and generating terabytes of nonsense maps, ballistic paths and “an architectural sketch of a massive bird cage the size of Milpitas, California (approximately 13 square miles)” resembling “a futuristic aviary city with large domes, interconnected sky bridges, perches, and naturalistic environments like forests, lakes, and cliffs inside.”
But support immediately refunded everything, I had backups and the whole thing was hilarious if irritating.