What are the most upvoted users of Hacker News commenting on? Powered by the /leaders top 50 and updated every thirty minutes. Made by @jamespotterdev.
Yes. I once wanted C unions limited to fully mapped type conversions, where any bit pattern in either type is a valid bit pattern in the other. Then you can map two "char" to "int". Even "float". But pointer types must match exactly.
If you want disjoint types, something like Pascal's discriminated variants or Rust's enums is the way to go. It's embarrassing that C never had this.
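For illustration, a minimal sketch (names are mine, not from the comment) of the hand-rolled tagged union C programmers fall back on in the absence of real discriminated variants:

```c
/* A hand-maintained discriminated union: the "kind" tag plays the role
   that Pascal variant records and Rust enums enforce automatically.
   Keeping the tag in sync with the active union member is entirely on
   the programmer, which is exactly the gap the comment points at. */
enum shape_kind { SHAPE_CIRCLE, SHAPE_RECT };

struct shape {
    enum shape_kind kind;                /* the discriminant */
    union {
        struct { double r; } circle;     /* valid when kind == SHAPE_CIRCLE */
        struct { double w, h; } rect;    /* valid when kind == SHAPE_RECT */
    } as;
};

double shape_area(const struct shape *s)
{
    switch (s->kind) {
    case SHAPE_CIRCLE: return 3.141592653589793 * s->as.circle.r * s->as.circle.r;
    case SHAPE_RECT:   return s->as.rect.w * s->as.rect.h;
    }
    return 0.0; /* unreachable if the tag is kept honest */
}
```

Nothing stops you from reading `.as.rect` while `kind` is `SHAPE_CIRCLE`; a Rust-style enum or Pascal variant would make that mismatch impossible to express.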
Many bad design decisions in C come from the fact that, originally, separate compilation was really dumb so that the compiler would fit on small machines.
I'm not really up to speed on this; can someone explain how the strait is actually closed? Are the Iranians threatening to sink any ships that pass through, or what? Why don't ships just turn their transponders off and try to make a run for it?
My dorm room was next door to Hal Finney. He was a freakin genius at every intellectual endeavor he bothered to try. My fellow students and I were in awe of him.
But you had to get to know him to realize what he was. To most people, he was just a regular guy, easy going, friendly, always willing to help.
He was also a libertarian, and the concept of bitcoin must have been very appealing to him.
And inventing "Satoshi" as the front man is just the prankish thing he'd do, as he had quite a sense of humor.
I regret not getting to know him better, though I don't think he found me very interesting.
My money's on Hal.
Changing the title was a good call.
The article has a good take on the "lie" problem. We know about the hallucination problem, which remains serious. The "lie" problem mentioned is that if you ask an LLM why it said or did something, it has no information about how it got that result. So it processes the "why" as a new query and produces a plausible explanation. Since that explanation is created without reference to the internals of how the previous query was processed, it may be totally wrong. That seems to be the type of "lie" the author is worried about in this essay.
(Yes, humans do that too.)
>what is the point of links/edges when the llm can figure out the relations by itself
Making it work less, faster, and saving tokens. Duh!
Mythos is a news article. This is an actual model you can use.
We have an EU dev whom we asked to submit a GDPR request for human review on something on Facebook.
There’s no apparent mechanism to do so. Support was clueless. The privacy email address responded weeks later with “not our department”.
Things which are relatively standard tend to get good generic support: Ethernet devices will generally be USB/CDC/ECM or RNDIS, for example. That may Just Work (tm) if it has the right descriptors.
The userland approach is much more useful for weird or custom devices. In particular, on Windows you can do one of these user space "drivers" without having to bother with driver signing, and if you use libusb it will be portable too.
(I maintain a small USB DFU based tool for work)
Stripe's pretty good at using other signals to block this sort of thing.
I thought Madoka looked more like
https://safebooru.org/index.php?page=post&s=list&tags=mahou_...
It was a little scary though to click on a link and get a sh file.
For my own web-based RSS reader I made no effort to open stories in an <iframe> or anything like that; I just show the text from the RSS feed, and if I want to see the article I click on a link and it opens in a new browser tab.
I've been arguing with people since 1994 that staying within the bounds of the web gives you more freedom than it takes away. My RSS reader works w/ my iPad and my VR headset.
For a while I have been wondering why people haven't demanded more transclusion capability from the WWW; you ought to be able to merge the DOM from one web site into another page. I did two projects in the 2010s: one to see what was possible in terms of transclusion of pure-HTML pages, and another "de-enshittification" project which would re-render JS-based pages at the DOM level the way Google's web crawler (or maybe archive.is) does. But a week's spike prototype on that one convinced me that it made latency, arguably the core problem of enshittification, much much worse. On the other hand, for an RSS reader you could do the work up front.
Right. From the article:
Through acoustic testing, the research team identified a narrow frequency band – a “safety gap” – capable of penetrating ANC headphone filters. This range lies between 750 and 780 Hz.
Is there a standard specifying this "safety band"? Is whatever Apple does for AirPods a de-facto standard?
I am just blown away with how much you can do with the ESP32! I used to be an AVR8 fanbois but being able to make music synthesizers, display controllers and things like that is so much fun!
Unless you're aware that such powerful commands are something you need once in a blue moon, and then you're grateful that the tool is flexible enough to allow them in the first place.
Git may be sharp and unwieldy, but it's also one of the dwindling number of real tools we still use - the trend of turning tools into toys consumed the regular user market and is eating into tech software as well.
Not one comment praising the existence of this site. Remarkable.
It would not surprise me if these actions are coming at the requests of governments. Strong encryption is one of the few things that challenges their monopoly on information; they have a very strong incentive to apply political pressure to the maintainers of these projects to, well, stop maintaining the projects. We've seen this in overt actions that the EU takes; in more covert actions that the U.S. government is suspected of taking; and in the news headlines about third-world dictatorships that just shut off the Internet. Tech companies are perhaps the most convenient leverage point for these actions.
More regulation won't help here, because the regulation-maker is itself the hostile party.
What would help is full control over the supply chain. Hardware that you own, free and open-source operating systems where no single person is the bottleneck to distribution, and free software that again has no single person who is a failure point and no way to control its distribution.
The lights that are used in experiments where you perturb people's circadian rhythms, à la The Geometry of Biological Time
https://lab.rockefeller.edu/cohenje/assets/file/098CohenBook... (review)
are really bright, as are the lights used for treating SAD. I always thought the fear over screens was the kind of bogus thing people wanted to believe in.
I don't blame you for this initial reaction, which would have been mine too had I not known who the author was. I don't mean that I automatically trust anything published by the reporter who busted Theranos (and won two Pulitzers for other major investigations). But I do mean that if John Carreyrou and his editors decided to publish something this long, that means they (and their lawyers) are willing to die on this hill, no matter how meandering the first paragraphs of his first-person narrative.
Since the story doesn't end with: "And then Adam Back bowed his head and said, 'You have found me, Satoshi'", I'm guessing they preferred to go for the softer "how we did this story" first-person narrative. There is no explicit smoking gun, like an official document or an eyewitness who asserts Satoshi's identity. But the circumstantial and technical evidence is quite thorough, to the point where the likeliest conclusions are:
1. Adam Back is Satoshi
2. Satoshi is someone who is either a close friend or frenemy of Back, and deliberately chose to leave an obfuscated trail that correlates with Back's persona and personal timeline.
I make holiday light shows with an open source program called XLights[0]. I'm sure you've seen the videos[1] of what people[2] can do. Usually the top comment is "man that is cool but I wouldn't want to be their neighbor!" followed by "my neighbors love my light shows".
Creating the sequences is time consuming, and a lot of people end up buying or sharing them, but those are rarely as good as the ones you make for yourself.
Some folks have dabbled with using AI to create the sequences. I think the biggest issues are the lack of training data and the fact that it's a very visual art, so there needs to be a better feedback loop between the text representation and the visual manifestation.
So if you're into using AI to make physical world things better, that would be a good place to look!
And the guy next to him is just staring at his phone, probably thinking, "I'm not even gonna ask".
Although if it were me I'd probably annoy the heck out of him asking why he had a Wii on the airplane!
How niche is retrocomputing?
I absolutely love my ancient machines, and I use them to explore period applications, much more than games.
I also love to restore and preserve them. There’s something magical about a Sun workstation running Solaris 2 with a Frog Design Trinitron monitor, or a MicroVAX running VMS and DECwindows. Or a multi-user Altair Z80. I think it’s sad that a lot of software was lost and some platforms were denied the documentation that’d enable their preservation (looking at you, IBM - document the AS/400 and release the old OSes to hobbyists).
As you know, I deeply respect you. Not trying to argue here, just provide my own perspective:
> Why would a writer put an article online if ChatGPT will slurp it up and regurgitate it back to users without anyone ever even finding the original article?
I write things for two main reasons: I feel like I have to. I need to create things. On some level, I would write stuff down even if nobody reads it (and I do do that already, with private things.) But secondly, to get my ideas out there and try to change the world. To improve our collective understanding of things.
A lot of people read things, it changes their life, and their life is better. They may not even remember where they read these things. They don't produce citations all of the time. That's totally fine, and normal. I don't see LLMs as being any different. If I write an article about making code better, and ChatGPT trains on it, and someone, somewhere, needs help, and ChatGPT helps them? Win, as far as I'm concerned. Even if I never know that it's happened. I already do not hear from every single person who reads my writing.
I don't mean to say that everyone has to share my perspective. It's just my own.
I've been doing photography for a long time but over the last few years had phases where I got bored of it and tried something new.
I had a long stretch when I was bored and carried the camera in my pack but never took any pictures; then one day I looked out my window at the sports center and decided to start shooting sports.
Posting photos to socials I found flower photographs were popular so I take a lot of them and find ways to not get bored. (Maybe I will start focus stacking one of these days)
Since the beginning of the year I have been "going out" as a character who is a bit like a Disney cast member who gets photos like
https://mastodon.social/@UP8/116326541009492328
from people who recognize my character. Like the Disney cast member, it works better when people have seen the movie, so I hand out these tokens
https://mastodon.social/@UP8/116086491667959840
which spread virally around a university campus, particularly among Chinese students who recognize the huli jing. All the time I have experiences "that could only happen in a manga", as when somebody who's heard the rumors is waiting at the bus stop for me. Laugh, but all my marketing KPIs have an extra zero on the right!
If only people didn't install Ask Jeeves toolbars all over the place and then ask their grandson during vacations to clean their computer.
Not only is this an insanely cool project, the writeup is great. I was hooked the whole way through. I particularly love this part:
> At this point, the system was trying to find a framebuffer driver so that the Mac OS X GUI could be shown. As indicated in the logs, WindowServer was not happy - to fix this, I’d need to write my own framebuffer driver.
I'm surprised by how well abstracted MacOS is (was). The I/O Kit abstraction layers seemed to actually do what they said. A little kudos to the NeXT developers for that.
I think there are two types of discussions, when it comes to LLMs: Some people talk about whether LLMs are "human" and some people talk about whether LLMs are "useful" (ie they perform specific cognitive tasks at least as well as humans).
Both of those aspects are called "intelligence", and thus these two groups cannot understand each other.
本当に? [1]
My take on it is that you should have a budget for thinking about AI tools and that budget should not be very much, not more than 10-20% of your time.
I mean, tools are going to evolve, I learned that Gemini can correctly answer programming questions that Copilot can't, so I changed my habits. But I do not go looking for new tools every day, I do not ask who I should be following on X to charge up my FOMO, etc.
There is a lot of talk about why some people get good results with AI coding and others don't. I think it comes down to
(1) Software development knowledge
(2) Subject matter knowledge
(3) A.I. programming knowledge
in that order. That is, "knowing how to prompt" is mainly about (1) and (2) and less about (3). I might even be wrong about the order of (1) and (2), but the gap between those and (3) is huge.
Let's say you want to write an application to help people do easy tax returns.
It helps to be an accountant or other tax expert because you have (2). I could get away with it because part of my (1) skills is checking out a backpack full of books at the library and learning enough about a subject to impersonate the subject matter expert. Somebody good at (3) could certainly get Claude Code to make something that looks like it could fill out your tax return for you, but it won't really work.
[1] Really?
This is excellent, though if you had chosen another OS, you could have called the project Wiindows.
EDIT: Oh interesting, the final paragraph says NT has been ported, didn't know that. Sadly, no pun is mentioned in that project.
Huh? Every email marketing system I've built has bounce management built in. I mean, if you don't, you get cut off by your deliverability provider.
This shopping center was built on a landfill
https://www.wskg.org/regional-news/2025-08-08/binghamton-off...
When I first saw it in the 1990s it was kinda on the outs - K-Mart was already failing (as a business) and the parking lot was visibly wavy because of subsidence. Funny enough, the New York Pizzeria mentioned in that article is run by my relatives.
One of the reasons I'm comfortable using them as coding agents is that I can and do review every line of code they generate, and those lines of code form a gate. No LLM-bullshit can get through that gate, except in the form of lines of code, that I can examine, and even if I do let some bullshit through accidentally, the bullshit is stateless and can be extracted later if necessary just like any other line of code. Or, to put it another way, the context window doesn't come with the code, forming this huge blob of context to be carried along... the code is just the code.
That exposes me to when the models are objectively wrong and helps keep me grounded with their utility in spaces I can check them less well. One of the most important things you can put in your prompt is a request for sources, followed by you actually checking them out.
And one of the things the coding agents teach me is that you need to keep the AIs on a tight leash. What is their equivalent in other domains of them "fixing" the test to pass instead of fixing the code to pass the test? In the programming space I can run "git diff *_test.go" to ensure they didn't hack the tests when I didn't expect it. It keeps me wondering what the equivalent of that is in my non-programming questions. I have unit testing suites to verify my LLM output against. What's the equivalent in other domains? Probably some other isolated domains here and there do have some equivalents. But in general there isn't one. Things like "completely forged graphs" are completely expected but it's hard to catch this when you lack the tools or the understanding to chase down "where did this graph actually come from?".
The success with programming can't be translated naively into domains that lack the tooling programmers built up over the years, and based on how many times the AIs bang into the guardrails the tools provide I would definitely suggest large amounts of skepticism in those domains that lack those guardrails.
Myself I see legibility as the most important variable for manifesting myself-as-a-fox. In my photography work I sure love it when people recognize me in 200ms because I get results like
https://mastodon.social/@UP8/116326541009492328
Legibility helps you share reality with other people, it lets you shape reality. It enables nonconformity as much as it enables conformity. What's wrong with that?
You should probably go watch the Terminator movies.
Where the data shows people are getting caught running red lights.
Which isn't necessarily where the most incidents are.
It was always practical, and we were always gonna lose some aircraft in real combat. Especially when flying them at low altitude; we've seen footage of American aircraft definitely in MANPADS range during the search for the F-15 crew.
It looks like you got ChatGPT to write shorter sentences than usual, but this article has the structural signs of something it wrote -- it circles around and repeats itself way too much.
As for time sensitivity, I'm going to say the sense of urgency is a bug and not a feature. I think sending people urgent messages about the security of their account is fundamentally bad because it trains people to apply System 1 thinking to security -- I mean, phishing messages are often about some urgent thing having to do with security. If one could say WE ARE NEVER GOING TO SEND YOU MESSAGES ABOUT THE SECURITY OF YOUR ACCOUNT and users believed it, I think those users would be more secure.
In that frame, a 45 sec timeout for an OTP is too short and that timeout should be increased even if it means increasing the entropy.
The case for speed is just... speed is good, full stop. If a response of any kind is delayed for 45 sec the user is going to think that it might never be coming and will be likely to retry, give up, kill -9, reboot the device, etc... Whatever it is.
It navigates by Brownian motion.
Yeah, "diabolical" overstates it. It isn't a wicked problem
https://en.wikipedia.org/wiki/Wicked_problem
Kinda funny but I am a fan of green LED light to supplement natural light on hot summer days. I can feel the radiant heat from LED lights on my bare skin and since the human eye is most sensitive to green light I feel the most comfortable with my LED strip set to (0,255,0)
I've been trying that prompt against other leading models and honestly GLM-5.1's is by far the best.
See Chesterton's Fence. There are plenty of things that are wrong with the status quo but also plenty of things that are right. People can always imagine things getting worse. I can be worried about social and economic inequality but also not want to live in Lenin's Russia or Mao's China.
I see it as part of the general problem of the culture industry.
Back in the 1980s the average young "science fiction fan" had never read Heinlein or Asimov or Niven or Le Guin or Anderson or Smith or Robinson or Pohl. Instead of reading 20 books by 20 different authors, they read The Hitchhiker's Guide to the Galaxy 20 times.
In the 2000s it was the same with fantasy and Harry Potter. Zillions and zillions of fantasy books but everybody just had to read the same one over and over.
And of course these folks are always negative and not positive; they aren't going to talk about what you should move on to, or even that you should read something like The Terraformers, which (1) is pretty good no matter how you slice it and (2) represents an NB point of view.
Contrast that to a healthy culture in Japan, where 4-panel comics that came up in just the last four years have had visual adaptations; I'd have no trouble naming 20 fantasy titles that have had visual adaptations, etc.
One thing everybody agrees on in the US is that it's all about gatekeeping, gatekeeping, gatekeeping, and more gatekeeping. Thing is, you can't gatekeep your way to a successful culture industry; you have to let creative people produce something.
Human brain, working quite alright.
Microsoft's management has always behaved as if it was a mistake to have added F# to Visual Studio 2010, and has been stuck finding a purpose for it ever since.
Note that most of its development is still done by the open source community, and its tooling is an outsider in Visual Studio, where everything else is shared between Visual Basic and C#.
With the official deprecation of VB and C++/CLI, even though the community keeps going with F#, CLR has changed meaning to "C# Language Runtime", for all practical purposes.
Also UWP never officially supported F#, although you could get it running with some hacks.
Similarly with ongoing Native AOT, there are some F# features that break under AOT and might never be rewritten.
A lost opportunity indeed.
I know it's not what people want to hear but my response to a lot of the comments here is just a general, I agree, it's time to stop using Windows.
They won't let you secure your drive the way you want. They won't let you secure your network the way you want (per the top-level comment about Wireguard). In so doing they are demonstrating not just that they can stop you from running these particular programs but that they are very likely going to exert this control on the entire product category going forward, and I see little reason to believe they will stop there. These are not minor issues; these are fundamental to the safety, security, and functionality of your machine. This indicates that Microsoft will continue to compromise the safety, security, and functionality of your machine going forward to their benefit as they see fit. This is intolerable for many, many use cases.
I think it is becoming clear that Microsoft no longer considers Windows users to be their customers. Despite the fact that people do in fact pay for Windows, Microsoft has shifted from largely supporting their customers to out-and-out exploiting them. (Granted, a certain amount of exploitation has been around for a long time, but things like the best backwards compatibility in the industry showed their support as well.)
I suspect this is the result of a lot of internal changes (not one big one), but I also see no particular reason at the moment to expect this to change. To my eyes both the first and second derivatives are heading in the direction of more exploitation: more treating users like cattle and less like customers. When new features or work are proposed at Microsoft, it is clear that they are being analyzed entirely in terms of how they can benefit Microsoft, and users are not at the table.
No amount of wishing this wasn't so is going to change anything. No amount of complaining about how hard it is to get off of Windows is going to change anything; indeed at this point you're just signalling to Microsoft that they are correct and they can treat you this way and there's nothing you will do about it for a long time.
Our bike lanes are just a line on the sidewalk and pedestrians routinely walk on them, cross the sidewalk in them without looking, let their toddlers/pets run into them, etc. Also, nobody realizes that a bicycle bell means "someone is coming", so they just ignore it as background noise.
I had to mount an airhorn onto my bike. At least people listen to that, though it's so loud I only use it in emergencies.
This is my favorite HN comment of them all.
I think we know Gates had dinner with Epstein a few times; we don't know that he was involved in Epstein's wrongdoing or that he knew about it, but of course we don't know that he wasn't or didn't. I think anybody involved with Epstein, however, demonstrated that they were a bad judge of character and could have bad judgement in general.
I am not so offended by Bill Gates having affairs, but I am offended by him having affairs with Microsoft employees.
What is funny about it is that the common themes of "unpleasant environment" and "low trust" are obscured by left/right bickering. In fact, the one thing they agree on is that you can't trust other people and you need to spend your own or somebody else's money as a consequence.
Crime is a social determinant of health
https://pmc.ncbi.nlm.nih.gov/articles/PMC9933800/
but you won't see it in an article like that. Pro-crime policies are racist because people of color are disproportionately affected by crime.
OK then, what is the opposite of this, the ad hoc union?
"Let me see if the secrets are specified. echo $SECRETS"
You're not actually allowed to avoid this by having multiple accounts, that falls under "ban evasion".
But yes, there are a lot of critical single-maintainer projects.
That is not merely psychological unless you're very early in your career and life, with no dependents, etc.
Apple is not zero nagware. For that matter my iPhone nags me all the time about iCloud storage I don’t want.
I know a lot of people with excellent credit scores who are not in financial stress at all. A decade of using a credit card for groceries and such and paying it in full will do it and not make you a slave to anybody.
Zero references to Turbopack, maybe start there?
I regret not learning about this before, but apparently "sidereal" is from Latin, and not what I always assumed, i.e. "side real" as in "kinda not quite real, wtf?!" day.
That actually deserves a competition of its own. Just what can you accomplish with a 256 bytes prompt? Or maybe 32 bytes, to compensate for expressiveness of natural language.
Yeah, why trust your actual experience over numbers? Nothing surer than synthetic benchmarks.
Is anyone that matters actually using jj?
>I also want Claude to work reliably but very few (no?) companies have ever seen this level of rapid growth.
You do understand, however, that aside from the growth/maturity path, this is also a path to enshittification and skinning their users, which might come even faster to LLMs than, say, Google, because the latter managed to raise hundreds of billions in investments in record time to recoup, with IPOs in sight.
Or could sell it on eBay for an amount of money that's nontrivial from the POV of a gig economy worker.
Those resins are absolutely fantastic, but do read the MSDS and be very careful; it doesn't take much to get yourself into the emergency ward with that stuff. Another risk to be acutely aware of is that these reactions are usually exothermic and can go runaway faster than you can blink if the conditions are right.
Future Crew's "Second Reality" was my introduction to demos, back in the 486 PC days.
> then SLS would represent a ~17 year long program that cost at least 41 billion dollars that netted 5 mission launches
SLS will never be worth it. But I'd discount from that price tag the continuity benefits of keeping the Shuttle folks around, and aerospace engineers employed, across the chasm years of the 2010s.
> Do you have an example use case?
The one that comes to mind is HPC, where you avoid over-allocation of the physical cores. If the process has the whole node to itself for a brief period, inefficient memory access might have a bigger impact than memory starvation.
IBM also has their RAID-like memory for mainframes that might be able to do something similar. This feels like software implemented RAID-1.
You couldn't be more wrong about that.
> it's not implausible to me that they soon also had some rudimentary understanding of e.g. coin flip frequencies
We can actually tell from their dice that they didn't.
I believe in the book Against the Gods the author described ancient dice as being (mostly) uneven. (One exception, I believe, was ancient Egypt.) The thinking was that a weird-looking die looks the most intuitively random. It wasn't until later, when the average gambler started reasoning statistically, that standardized dice became common.
These dice are highly non-standard. In their own way, their similarity to other ancient cultures' senses of randomness is kind of beautiful.
Maybe they could just, I don't know, use Claude to research their bugs. /s
Building a nuclear weapon that can be carried by Iraq's missiles is relatively difficult, because miniaturizing nuclear weapons requires much more complex designs. It took the US and the USSR quite a few test explosions to achieve such a warhead.
Building a bulky nuclear weapon that fits in, say, a shipping container, is not hard if sufficient highly enriched uranium is available. That's Hiroshima level nuclear technology, the gun-type bomb.[1]
This is the difference between the "years away" and the "weeks away" estimates. It depends on whether the delivery method is an ICBM or a shipping container.
The VBIOS is around 32-64k. The modesetting path is probably a few k.
And it depends on DOS configuring the memory space to leave an INT 20h call (to terminate the program) at a place that's easy to RET to.
This has always been the case, and actually inherited from CP/M.
disabling secure boot
...making it even more clear what "secure" boot actually secures: the control others have over your own computer.
Gibraltar's political situation is what it is because this was sorted out in the Treaty of Utrecht three hundred years ago, and Europe got very tired of leaders that thought they could redraw the map at the cost of millions of lives.
Probably the best we can expect from Iran is a frozen conflict like Korea or Cyprus, that stays frozen.
This is like saying we should have halted all RSA deployments until improvements in sieving stopped happening. The lattice contestants were all designed assuming BKZ would continually improve. It's not 1994 anymore, asymmetric cryptography is not a huge novelty to the industry, nobody is doing the equivalent of RSA-512.
> This is the counter argument
When the French helped us during the Revolutionary War, they didn't shore bombard the colonists' kids because it would have been bad and counterproductive.
Had a minor conniption until I saw the year. OpenAI just struggled to close a round. And the New Yorker just published an unflattering profile of Altman [1]. So it would make sense they'd go back to the PR strategy of "stop me from shooting grandma."
[1] https://www.newyorker.com/magazine/2026/04/13/sam-altman-may...
I guess I didn't see the inequality focus in your first comment. At least, not beyond the qualitative distinction between assets as cash and sundries versus assets as financial assets. I pointed out that real wages are up in response to your claim about people being paid dollars. (Each dollar we're paid is worth less, but the total take-home buys more. Hence real wages.) I think it's a non sequitur to then turn around and say, well, I was actually arguing about inequality from the start.
Data Collection and Estimation Methodology: https://electricity.heatmap.news/methodology
Perhaps stop taking the administration's claims at face value. Their army has not been destroyed. They continue to launch missiles daily and have been extraordinarily successful in targeting US/Israel radar and defensive assets throughout the region. They have suffered air force and naval losses, but if you look back at analysis from before the war started, exactly nobody considered the Iranian air force or navy to be of any strategic significance. Iran operates on a distributed military structure rather than a centralized command, so the assassination of senior political and military leaders is not the crippling blow the US expected it to be.
And really, that expectation is itself stupid. Suppose the US got involved in a hot conventional war with another superpower, and in the first week they killed the President, the vice President, a bunch of Representatives and Senators, and a bunch of senior figures at the Pentagon. Would the US just fold, or would it fill those positions via the line of succession, declare a national emergency, and fight back vigorously? You know the answer is #2, and the idea that other countries might do the same thing should not be a surprise. It appears the US administration has fallen into the trap of believing the shallowest version of its own propaganda about other countries, and assuming that Iran was just like Iraq under Saddam Hussein but with slightly different outfits.
The Iranian strategy is basically Muhammad Ali's rope-a-dope: absorb punishment that is exhausting for the aggressor to administer (very expensive munitions with limited stocks) while spending relatively little of their own (dirt-cheap drones with small payloads but effective targeting, continually degrading the aggressor's radar visibility and military infrastructure). The one limited ground incursion so far (ostensibly to rescue an airman, but almost certainly a cover for something else) resulted in the loss of multiple heavy transport aircraft, helicopters, and drones at a cost of hundred$ of million$.
Lifting of all US sanctions on Iran
I do not see that happening.
That example you gave is extremely memorable: I recognised it as exactly the kind of insanely stupid false positive that a highly praised (and expensive) static analyser I ran on a codebase several years ago would emit copiously.
Nor should they be admissible as evidence in court.
I was hoping you'd comment here. Thank you. Amazing bits of lore.
It's not just the 10,000 hours, it's learning it very young.
I am an ex-professional ballet dancer, and one of the things I always find interesting is that any experienced ballet dancer can instantly tell who trained as a child and who didn't solely by how they stand (literally not even moving) at the barre. But the thing is, children with only a few years of training under their belt will often show this good form, while I have literally never seen someone who started as an adult, even dedicated adults who take class 4-5 times a week, get rid of that "I started as an adult" posture.
As an example, I was actually quite impressed at how Natalie Portman really managed to "look the part" in her role as a ballerina in Black Swan. Still, she wasn't fooling anyone with training - even with just a simple port de bras (raising of an arm), you could easily tell she wasn't a dancer.
This thread is not about Claude or LLMs.
You are the only one making fake arguments. The threat was explicitly to destroy 'a civilization', which nobody but yourself considers equivalent to 'infrastructure'. Ply your lame rhetorical fallacies elsewhere.
> No more cheap borrowing, no more low interest rates, hello constant high inflation.
Do you mean that we’ll have high inflation because we’ll keep running massive deficits? Because many countries that don’t have the reserve currency also have low inflation.
Iranian-Affiliated Cyber Actors Exploit Programmable Logic Controllers Across US Critical Infrastructure - https://www.cisa.gov/news-events/cybersecurity-advisories/aa... - April 7th, 2026
Seems like Trump agreed to give Iran control over the Strait of Hormuz:
A lot of people who are successful in it can take advantage of connections they have in the industry to make the first few sales.
For instance, when David Duffield got kicked out of PeopleSoft, he went out and started Workday to make a competitive product, and of course his name was well known in the industry, so the skids were greased for him.
See also this legendary story: https://www.marketingmonk.so/p/salesforce-grit-to-giant-mark...
Battery storage is now cheap enough to unleash India’s full solar potential - https://ember-energy.org/latest-insights/battery-storage-is-... - April 7th, 2026
https://ember-energy.org/app/uploads/2026/04/Battery-storage...
I was at a dance hall the other day, and this young lady came floating in. It's hard to describe how she walked - just like she was effortlessly gliding. It looks easy, but anyone else would look like a moose trying it.
It's the result of a lifetime of ballet dancing. Probably 10,000 hours, at least.
I was just in awe.
See https://amppublic.com and Stanford CS153, https://www.youtube.com/watch?v=mZqh7emiz9Q
Lattice cryptography was a contender alongside curves as a successor to RSA. It's not new. The specific lattice constructions we looked at during NIST PQC were new iterations on it, but so was Curve25519 when it was introduced. It's extremely not a rush job.
The elephant in the room in these conversations is Daniel Bernstein and the shade he has been casting on ML-KEM for the last few years. The things I think you should remember about that particular elephant are (1) that he's cited SIDH as a reason to be suspicious of ML-KEM, which indicates that he thinks you're an idiot, and (2) that he himself participated in the NIST PQC KEM contest with a lattice construction.
That movie is powerful and well worth watching.
We've been trying to get a Claude Code subscription for my company. The pricing page says $25, but they actually charge £25, about 34% higher. I've been trying to talk to them for months; their support people don't even read what I'm saying and insist that it's somehow because of proration.
I'm fairly sure their billing backend is vibe-coded and their support is worse than Google's.