What are the most upvoted users of Hacker News commenting on? Powered by the /leaders top 50 and updated every thirty minutes. Made by @jamespotterdev.
Human brain, working quite alright.
Microsoft's management has always behaved as if it was a mistake to have added F# to Visual Studio 2010, and has been stuck finding a purpose for it ever since.
Note that most of its development is still done by the open source community, and its tooling is an outsider in Visual Studio, where everything else is shared between Visual Basic and C#.
With the official deprecation of VB and C++/CLI, and even though the community keeps F# going, CLR has for all practical purposes changed meaning to "C# Language Runtime".
Also UWP never officially supported F#, although you could get it running with some hacks.
Similarly with ongoing Native AOT, there are some F# features that break under AOT and might never be rewritten.
A lost opportunity indeed.
I know it's not what people want to hear, but my response to a lot of the comments here is just a general: I agree, it's time to stop using Windows.
They won't let you secure your drive the way you want. They won't let you secure your network the way you want (per the top-level comment about Wireguard). In so doing they are demonstrating not just that they can stop you from running these particular programs but that they are very likely going to exert this control on the entire product category going forward, and I see little reason to believe they will stop there. These are not minor issues; these are fundamental to the safety, security, and functionality of your machine. This indicates that Microsoft will continue to compromise the safety, security, and functionality of your machine going forward to their benefit as they see fit. This is intolerable for many, many use cases.
I think it is becoming clear that Microsoft no longer considers Windows users to be their customers. Despite the fact that people do in fact pay for Windows, Microsoft has shifted from largely supporting their customers to out-and-out exploiting their customers. (Granted, a certain amount of exploitation has been around for a long time, but things like the best backwards compatibility in the industry showed their support as well.)
I suspect this is the result of a lot of internal changes (not one big one), but I also see no particular reason at the moment to expect this to change. To my eyes both the first and second derivatives are heading in the direction of more exploitation. More treating users like cattle and less like customers. When new features or work are proposed at Microsoft, it is clear that they are being analyzed entirely in terms of how they can benefit Microsoft, and users are not at the table.
No amount of wishing this wasn't so is going to change anything. No amount of complaining about how hard it is to get off of Windows is going to change anything; indeed at this point you're just signalling to Microsoft that they are correct and they can treat you this way and there's nothing you will do about it for a long time.
Our bike lanes are just a line on the sidewalk and pedestrians routinely walk on them, cross the sidewalk in them without looking, let their toddlers/pets run into them, etc. Also, nobody realizes that a bicycle bell means "someone is coming", so they just ignore it as background noise.
I had to mount an airhorn onto my bike. At least people listen to that, though it's so loud I only use it in emergencies.
This is my favorite HN comment of them all.
I think we know Gates had dinner with Epstein a few times; we don't know that he was involved in Epstein's wrongdoing or that he knew about it, but of course we don't know that he wasn't or didn't. I think anybody involved with Epstein, however, demonstrated that they were a bad judge of character and could have bad judgement in general.
I am not so offended by Bill Gates having affairs, but I am offended by him having affairs with Microsoft employees.
What is funny about it is that the common themes of "unpleasant environment" and "low trust" are obscured by left/right bickering. In fact the one thing they agree on is that you can't trust other people, and that you need to spend your own or somebody else's money as a consequence.
Crime is a social determinant of health
https://pmc.ncbi.nlm.nih.gov/articles/PMC9933800/
but you won't see it in an article like that. Pro-crime policies are racist because people of color are disproportionately affected by crime.
OK then, what is the opposite of this, the adhoc union?
"Let me see if the secrets are specified. echo $SECRETS"
You're not actually allowed to avoid this by having multiple accounts, that falls under "ban evasion".
But yes, there's a lot of critical single maintainer projects.
That is not merely psychological unless you're very early in your career and life, with no dependents, etc.
Mario Zechner wrote Pi, an agent framework, and wrote Pi (yes the names are confusing), a coding harness (like Claude Code) on top of it. OpenClaw uses Pi the framework, so now Mario Zechner is joining Armin Ronacher's company.
Apple is not zero nagware. For that matter my iPhone nags me all the time about iCloud storage I don’t want.
Yeah WLED does it fine, I've built a few and it works well.
I know a lot of people with excellent credit scores who are not in financial stress at all. A decade of using a credit card for groceries and such and paying it in full will do it and not make you a slave to anybody.
Zero references to Turbopack, maybe start there?
I regret not learning about this before, but apparently "sidereal" is from Latin, and not what I always assumed, i.e. "side real" as in "kinda not quite real, wtf?!" day.
That actually deserves a competition of its own. Just what can you accomplish with a 256 bytes prompt? Or maybe 32 bytes, to compensate for expressiveness of natural language.
Yeah, why trust your actual experience over numbers? Nothing surer than synthetic benchmarks
Is anyone that matters actually using jj?
>I also want Claude to work reliably but very few (no?) companies have ever seen this level of rapid growth.
You do understand, however, that aside from the growth/maturity path, this is also a path to enshittification and skinning their users, which might come even faster to LLMs than to, say, Google, because the latter managed to attract hundreds of billions in investment in record time to recoup, with IPOs in sight.
Or could sell it on eBay for an amount of money that's nontrivial from POV of a gig economy worker.
Those resins are absolutely fantastic, but do read the MSDS and be very careful; it doesn't take much to get yourself into the emergency ward with that stuff. Another risk to be acutely aware of is that these reactions are usually exothermic and can go runaway faster than you can blink if the conditions are right.
Future Crew's "Second Reality" was my introduction to demos, back in the 486 PC days.
> then SLS would represent a ~17 year long program that cost at least 41 billion dollars that netted 5 mission launches
SLS will never be worth it. But I'd discount from that price tag the continuity benefits of keeping the Shuttle folks around, and aerospace engineers employed, across the chasm years of the 2010s.
> Do you have an example use case?
The one that comes to mind is HPC, where you avoid over-allocation of the physical cores. If the process has the whole node to itself for a brief period, inefficient memory access might have a bigger impact than memory starvation.
IBM also has their RAID-like memory for mainframes, which might be able to do something similar. This feels like software-implemented RAID-1.
You couldn't be more wrong about that.
> it's not implausible to me that they soon also had some rudimentary understanding of e.g. coin flip frequencies
We can actually tell from their dice that they didn't.
I believe in the book Against the Gods the author described ancient dice as being—mostly—uneven. (One exception, I believe, was ancient Egypt.) The thinking was that a weird-looking die looks the most intuitively random. It wasn't until later, when the average gambler started reasoning statistically, that standardized dice became common.
These dice are highly non-standard. In their own way, their similarity to other ancient cultures' senses of randomness is kind of beautiful.
Maybe they could just, I don't know, use Claude to research their bugs. /s
Building a nuclear weapon that can be carried by Iran's missiles is relatively difficult, because miniaturizing nuclear weapons requires much more complex designs. It took the US and the USSR quite a few test explosions to achieve such a warhead.
Building a bulky nuclear weapon that fits in, say, a shipping container, is not hard if sufficient highly enriched uranium is available. That's Hiroshima level nuclear technology, the gun-type bomb.[1]
This is the difference between the "years away" and the "weeks away" estimates. It depends on whether the delivery method is an ICBM or a shipping container.
The VBIOS is around 32-64k. The modesetting path is probably a few k.
And it depends on DOS configuring the memory space to leave an INT 20h call (to terminate the program) at a place that's easy to RET to.
This has always been the case, and actually inherited from CP/M.
disabling secure boot
...making it even more clear what "secure" boot actually secures: the control others have over your own computer.
Gibraltar's political situation is what it is because this was sorted out in the Treaty of Utrecht three hundred years ago, and Europe got very tired of leaders that thought they could redraw the map at the cost of millions of lives.
Probably the best we can expect from Iran is a frozen conflict like Korea or Cyprus, that stays frozen.
This is like saying we should have halted all RSA deployments until improvements in sieving stopped happening. The lattice contestants were all designed assuming BKZ would continually improve. It's not 1994 anymore, asymmetric cryptography is not a huge novelty to the industry, nobody is doing the equivalent of RSA-512.
> This is the counter argument
When the French helped us during the Revolutionary War, they didn't shore bombard the colonists' kids because it would have been bad and counterproductive.
Had a minor conniption until I saw the year. OpenAI just struggled to close a round. And the New Yorker just published an unflattering profile of Altman [1]. So it would make sense they'd go back to the PR strategy of "stop me from shooting grandma."
[1] https://www.newyorker.com/magazine/2026/04/13/sam-altman-may...
I guess I didn't see the inequality focus in your first comment. At least, not beyond the qualitative assets-as-cash-and-sundries vs assets-as-financial-assets distinction. I pointed out that real wages are up in response to your claim about people being paid dollars. (Each dollar we're paid is individually worth less, but the total take-home is worth more. Hence real wages.) I think it's a non sequitur to then turn around and say, well, I was actually arguing about inequality from the start.
Data Collection and Estimation Methodology: https://electricity.heatmap.news/methodology
Perhaps stop taking the administration's claims at face value. Their army has not been destroyed. They continue to launch missiles daily and have been extraordinarily successful in targeting US/Israel radar and defensive assets throughout the region. They have suffered air force and naval losses, but if you look back at analysis from before the war started, exactly nobody considered the Iranian air force or navy to be of any strategic significance. Iran operates on a distributed military structure rather than a centralized command, so the assassination of senior political and military leaders is not the crippling blow the US expected it to be.
And really, that expectation is itself stupid. Suppose the US got involved in a hot conventional war with another superpower, and in the first week they killed the President, the vice President, a bunch of Representatives and Senators, and a bunch of senior figures at the Pentagon. Would the US just fold, or would it fill those positions via the line of succession, declare a national emergency, and fight back vigorously? You know the answer is #2, and the idea that other countries might do the same thing should not be a surprise. It appears the US administration has fallen into the trap of believing the shallowest version of its own propaganda about other countries, and assuming that Iran was just like Iraq under Saddam Hussein but with slightly different outfits.
The Iranian strategy is basically Muhammad Ali's rope-a-dope: absorb punishment administered at exhausting cost (very expensive munitions with limited stocks) while spending relatively little of their own (dirt-cheap drones with small payloads but effective targeting, continually degrading the aggressor's radar visibility and military infrastructure). The one limited ground incursion so far (ostensibly to rescue an airman, but almost certainly a cover for something else) resulted in the loss of multiple heavy transport aircraft, helicopters, and drones at a cost of hundred$ of million$.
Lifting of all US sanctions on Iran
I do not see that happening.
That example you gave is extremely memorable: I recognised it as exactly the kind of insanely stupid false positive that a highly praised (and expensive) static analyser I ran on a codebase several years ago would emit copiously.
Nor should they be admissible as evidence in court.
I was hoping you'd comment here. Thank you. Amazing bits of lore.
It's not just the 10,000 hours, it's learning it very young.
I am an ex-professional ballet dancer, and one of the things I always find interesting is that any experienced ballet dancer can instantly tell who trained as a child and who didn't solely by how they stand (literally not even moving) at the barre. But the thing is, children with only a few years of training under their belt will often show this good form, while I have literally never seen someone who started as an adult, even dedicated adults who take class 4-5 times a week, get rid of that "I started as an adult" posture.
As an example, I was actually quite impressed at how Natalie Portman really managed to "look the part" in her role as a ballerina in Black Swan. Still, she wasn't fooling anyone with training - even with just a simple port de bras (raising of an arm), you could easily tell she wasn't a dancer.
This thread is not about Claude or LLMs.
Israel seems likely to do anything they can to start things up again.
You are the only one making fake arguments. The threat was explicitly to destroy 'a civilization', which nobody but yourself considers equivalent to 'infrastructure'. Ply your lame rhetorical fallacies elsewhere.
> No more cheap borrowing, no more low interest rates, hello constant high inflation.
Do you mean that we’ll have high inflation because we’ll keep running massive deficits? Because many countries that don’t have the reserve currency also have low inflation.
Iranian-Affiliated Cyber Actors Exploit Programmable Logic Controllers Across US Critical Infrastructure - https://www.cisa.gov/news-events/cybersecurity-advisories/aa... - April 7th, 2026
Seems like Trump agreed to give Iran control over the Strait of Hormuz:
Battery storage is now cheap enough to unleash India’s full solar potential - https://ember-energy.org/latest-insights/battery-storage-is-... - April 7th, 2026
https://ember-energy.org/app/uploads/2026/04/Battery-storage...
I was at a dance hall the other day, and this young lady came floating in. It's hard to describe how she walked - just like she was effortlessly gliding. It looks easy, but anyone else would look like a moose trying it.
It's the result of a lifetime of ballet dancing. Probably 10,000 hours, at least.
I was just in awe.
I love it. I also really really like the brutalist/derelict aesthetic, and I think this nails it. Well done.
See https://amppublic.com and Stanford CS153, https://www.youtube.com/watch?v=mZqh7emiz9Q
Lattice cryptography was a contender alongside curves as a successor to RSA. It's not new. The specific lattice constructions we looked at during NIST PQC were new iterations on it, but so was Curve25519 when it was introduced. It's extremely not a rush job.
The elephant in the room in these conversations is Daniel Bernstein and the shade he has been casting on MLKEM for the last few years. The things I think you should remember about that particular elephant are (1) that he's cited SIDH as a reason to be suspicious of MLKEM, which indicates that he thinks you're an idiot, and (2) that he himself participated in the NIST PQC KEM contest with a lattice construction.
That movie is powerful and well worth watching.
I LOVED the physical magazine, read every word, from the time I started subscribing as a college freshman in 1966 until they started to pile up unread around 2015. Nearly 50 years seems like a good run.
Not only did this one draw me an excellent pelican... it also animated it! https://simonwillison.net/2026/Apr/7/glm-51/
I buy the rationale for this. There's been a notable uptick over the past couple of weeks in credible security experts unrelated to Anthropic sounding the alarm about the recent influx of genuinely valuable AI-assisted vulnerability reports.
From Willy Tarreau, lead developer of HA Proxy: https://lwn.net/Articles/1065620/
> On the kernel security list we've seen a huge bump of reports. We were between 2 and 3 per week maybe two years ago, then reached probably 10 a week over the last year with the only difference being only AI slop, and now since the beginning of the year we're around 5-10 per day depending on the days (fridays and tuesdays seem the worst). Now most of these reports are correct, to the point that we had to bring in more maintainers to help us.
> And we're now seeing on a daily basis something that never happened before: duplicate reports, or the same bug found by two different people using (possibly slightly) different tools.
From Daniel Stenberg of curl: https://mastodon.social/@bagder/116336957584445742
> The challenge with AI in open source security has transitioned from an AI slop tsunami into more of a ... plain security report tsunami. Less slop but lots of reports. Many of them really good.
> I'm spending hours per day on this now. It's intense.
From Greg Kroah-Hartman, Linux kernel maintainer: https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_...
> Months ago, we were getting what we called 'AI slop,' AI-generated security reports that were obviously wrong or low quality. It was kind of funny. It didn't really worry us.
> Something happened a month ago, and the world switched. Now we have real reports. All open source projects have real reports that are made with AI, but they're good, and they're real.
Shared some more notes on my blog here: https://simonwillison.net/2026/Apr/7/project-glasswing/
> most metro (including NYC) recycling is effectively a scam. How do you mandate composting in NYC
Also a scam.
Submitter here. I posted because I think there is a confluence of interesting macroeconomic factors at work here. Certainly, immigration policy is a factor, as the restaurant industry is highly dependent on such labor. Meanwhile, we're still seeing 55+ workers leave the labor force rapidly (~4M Boomers continue to retire per year, ~330k/month), leaving only the younger prime-working-age cohort, which continues to shrink. At the same time, we're seeing youth unemployment around 6%, including those with a college degree [1].
As the piece mentions, young men are "staying on the sidelines" versus engaging in low-wage, low-status restaurant work (in this context). So, who can hold out longer: businesses and industries historically underpaying workers but "desperate for them"? Or potential workers? Because as long as US immigration is constrained, as a business you get to pick from whoever is on the soil at whatever the market-clearing price for labor is.
[1] Young men are struggling in a slowing job market, even if they have college degrees - https://www.nbcnews.com/business/economy/young-men-strugglin... - August 13th, 2025 ("Men ages 23 to 30 are discovering that a bachelor's degree doesn't offer the same protection from unemployment that it used to")
Great article. You really start to appreciate floating point when you have to squeeze some arbitrary level of performance out of an underpowered (say embedded) CPU and you decide to use fixed point. Suddenly all those nasty little edge cases that the floating point library would have handled silently, reliably and hopefully correctly for you need to be dealt with.
Just keeping track of the shifts during a chain of multiplications and additions can really ruin your day, and the good code will look exactly the same as the bad code. I'm doing something like that right now and have moved from doubles to fixed-point 64-bit ints (32.32); it works, but it took me much longer than I thought it would (a phase angle estimator for SDR output).
I don't think this is broadly true and to the extent it's true for cryptographic software, it's only relatively recently become true; in the 2000s and 2010s, if I was tasked with assessing software that "encrypted a file" (or more likely some kind of "message"), my bet would be on finding a game-over flaw in that.
Sure, but "the same results" will rapidly become unacceptable results if much better results are available.
Non-proliferation is over and done with. Every country that can afford it and has the capability will have the bomb. There is no way this genie goes back in the bottle. And that means that a future nuclear war, if not today, is all but a certainty.
Europe has one 'weapon' they can use, but they can use it just once: dump the bonds. The problem is that you need a reason big enough that by the time you decide to do it, it will likely be too late to be effective; and as a punitive measure it makes little sense, since it's just one step short of a declaration of war. If Trump goes ahead with this madness, then tomorrow morning the world economy will be in shambles, no matter what. Note that we got here ostensibly to 'free the Iranian people'; apparently they need to be murdered to make them free. It's grotesque.
My local tennis court's reservation website was broken and I couldn't cancel a reservation, and I asked GLM-5.1 if it can figure out the API. Five minutes later, I check and it had found a /cancel.php URL that accepted an ID but the ID wasn't exposed anywhere, so it found and was exploiting a blind SQL injection vulnerability to find my reservation ID.
Overeager, but I was really really impressed.
> Contrast this to the recent trend of dropping in-line references to project names
Can you give an example? A reference to a project, without a link directly to the project, doesn't meet general definitions of spam.
Response from Iran: "Iran’s Islamic Revolutionary Guard Corps (IRGC) says it will respond outside the region and deprive the United States and its allies of oil and gas “for many years” if the US crosses “red lines” and attacks civilian facilities."[1]
"Iran has closed all diplomatic and indirect channels of communication with the United States, the state-run Tehran Times reported ..." US media does not seem to have picked up on this, but media in India and China have.[2] But the common source seems to be "Tehran Times", and it's unclear who runs that or where they get their info. New York Times, AP, and AlJazeera are not saying that. Xinhua has a one-line note with no source. The US White House says Vance is talking to somebody. Politico says Vance is on "standby".[3]
A negotiated cease-fire seems unlikely now.
[1] https://www.aljazeera.com/news/liveblog/2026/4/7/iran-war-li...
[2] https://www.the-independent.com/bulletin/news/iran-mediators...
[3] https://www.politico.com/news/2026/04/06/vance-is-on-standby...
The focus on the speed of the agent generated code as a measure of model quality is unusual and interesting. I've been focusing on intentionally benchmaxxing agentic projects (e.g. "create benchmarks, get a baseline, then make the benchmarks 1.4x faster or better without cheating the benchmarks or causing any regression in output quality") and Opus 4.6 does it very well: in Rust, it can find enough low-level optimizations to make already-fast Rust code up to 6x faster while still passing all tests.
It's a fun way to quantify the real-world performance between models that's more practical and actionable.
It is not quoted, it is summarized. You are quoting the first sentence of the post, but the certainty implied in that sentence is immediately undercut by the next; “I don’t want it to happen but it probably will”, and made even more muddy by the rest, which continues: “However, now that we have Complete and Total Regime Change, where different, smarter, and less radicalized minds prevail, maybe something revolutionarily wonderful can happen, WHO KNOWS? We will find out tonight, one of the most important moments in the long and complex history of the World. 47 years of extortion, corruption, and death, will finally end. God Bless the Great People of Iran!”
I see two basic cases for the people who are claiming it is useless at this point.
One is that they tried AI-based coding a year or two ago, came to the (IMHO completely correct at the time) conclusion that it was nearly useless, and have not tried it since to see that the situation has changed. To which the solution is: try it again. It has changed a lot.
The other is those who have incorporated into their personal identity that they hate AI and will never use it. I have seen people do things like fire AI at a task they have good reason to believe it will fail at, and, when it does, project that out to all tasks without letting themselves consciously realize that picking a bad task on purpose stacks the deck.
To those people my solution is to encourage them to hold on to their skepticism. I try to hold on to it as well despite the incredible cognitive temptation not to. It is very useful. But at the same time... yeah, there was a step change in the past year or so. It has gotten a lot more useful...
... but a lot of that utility is in ways that don't obviate skilled senior coding skills. It likes to write scripting code without strong types. Since the last time I wrote that, I have in fact used it in a situation where there were enough strong types that it spontaneously originated some, but it still tends to write scripting code out of that context no matter what language it is working in. It is good at very straight-line solutions to code but I rarely see it suggest using databases, or event sourcing, or a message bus, or any of a lot of other things... it has a lot of Not Invented Here syndrome where it instead bashes out some minimal solution that passes the unit tests with flying colors but can't be deployed at scale. No matter how much documentation a project has it often ends up duplicating code just because the context window is only so large and it doesn't necessarily know where the duplicated code might be. There's all sorts of ways it still needs help to produce good output.
I also wonder how many people are failing to prompt it enough. Some of my prompts are basically "take this and do that and write a function to log the error", but a lot of my prompts are a screen or two of relevant context of the project, what it is we are trying to do, why the obvious solution doesn't work, here's some other code to look at, here's the relevant bugs and some Wiki documentation on the planning of the project, we should use {event sourcing/immutable trees/stored procedures/whatever}, interact with me for questions before starting anything. This is not a complete explanation of what they are doing anymore, but there's still a lot of ways in which what an LLM can really do is style transfer... it is just taking "take this and do that and write a function to log the error" and style-transforming that into source code. If you want it to do something interesting it really helps to give it enough information in the first place for the "style transfer" to get a hold of and do something with. Don't feel silly "explaining it to a computer", you're giving the function enough data to operate on.
Mostly.
Remember that the orange trees took the CO2 out of the atmosphere to make the peels. Some of it, probably most of it, is going back into the atmosphere, but some of it will become soil carbon, which could be retained for decades.
https://en.wikipedia.org/wiki/Soil_carbon
Soil carbon is like dark matter in that there is a lot of it and it is poorly understood.
It really says something about the current state of affairs that after reading the headline, my first thought was oh god no, the photos are probably all hallucinated...
But it's actually really cool how they used AI to better determine the locations of the photos. I love this!
"Stop paying for"?
But you have to pay for your own S3 bucket as well... and it's generally several times more expensive per terabyte, though this depends on various factors. (Not to mention you might still have to pay Google if, e.g., your Gmail doesn't fit into the free tier anymore.)
If this is supposed to be financially motivated, the creator seems to have it somewhat backwards.
That we accept the lying doesn't mean it's good.
Translucent concrete: https://www.allplan.com/blog/translucent-concrete/
Which llm is best at driving DuckDB currently?
I think H-1Bs have always been good for big companies like IBM, Google, and Infosys, and bad for startups. I mean, at a certain size, many of your hires are key employees, and winning the lottery is not a business plan. To big cos, though, workers are fungible, and if only 20% of the people win the lottery you can still open an office in Bangalore or Shanghai and tap foreign talent that way.
I am no fan of the H-1B program and think we should do something else like give out more green cards. I am happy to 'compete' with them if they are getting paid market rates like me, in fact working together with talented people puts my skills on wheels. I have known so many H-1Bs who were treated terribly, like the kind of situations that make HR staff quit.
Cloudflare has long been doing work on PQ (sometimes in conjunction with Google) and rolled out PQ encryption for our customers. You can read about where this all started for us 7 years back: https://blog.cloudflare.com/towards-post-quantum-cryptograph... and four years ago rolled out PQ encryption for all customers: https://blog.cloudflare.com/post-quantum-for-all/
The big change here is that we're going to roll out PQ authentication as well.
One important decision was to make this "included at no extra cost" with every plan. The last thing the Internet needs is blood-sucking parasites charging extra for this.
The 4 Day Week campaign advocates a 32-hour week at full pay, which it argues can be afforded, judging by aggregate profits in economic systems.
https://www.4dayweek.com/ (global) | https://workfour.org/ (US)
https://www.theguardian.com/commentisfree/2024/nov/21/icelan...
https://news.ycombinator.com/item?id=39992783 (citations)
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
Sounds like Fiberhood is adjacent to https://solarunitedneighbors.org/ ?
I keep multiple copies across systems, as is tradition.
(25+ years in tech, ymmv)
Are there any efforts to fix this?
"Also, it seems like all the Copilot 'connected experiences' are really just a chat window without any real integration with the applications they are embedded in."
I was triple-booked today. Two of the meetings in question should have had significant overlap between attendees. I figured, hey, there's this Copilot thing here, I'll ask it what the overlap is, that's the sort of thing an AI should be able to do. It comes back and reports that there is one person in both meetings, and that "one person" isn't even me. That doesn't seem right. One of the autocompleted suggestions for the next thing to ask is "show me the entire list of attendees" so I'm like, sure, do that.
It turns out that the API Copilot has access to can only access the first ten attendees of the meetings. Both meetings were much larger than that.
Insert rant here about hobbling 2026 servers with random "plucked out of my bum" limits based on the capabilities of roughly 2000-era servers, and the sheer silliness of a default 10-attendee limit being imposed on any API into Outlook.
But also in general what a complete waste of hooking up an amazingly sophisticated AI model to such an impoverished view of the world.
Thanks for this, the anecdote with the lost data was very concerning to me.
I think you're exactly right about the WAL shared memory not crossing the container boundary. EDIT: It looks like WAL works fine across Docker boundaries, see https://news.ycombinator.com/item?id=47637353#47677163
I don't know much about Kamal but I'd look into ways of "pausing" traffic during a deploy - the trick where a proxy pretends that a request is taking another second to finish when it's actually held in the proxy while the two containers switch over.
From https://kamal-deploy.org/docs/upgrading/proxy-changes/ it looks like Kamal 2's new proxy doesn't have this yet, they list "Pausing requests" as "coming soon".
I would personally pay money not to have this thing.
It's wonderful and I love that someone else loves it. The care put into it is fantastic. Vive la différence.
(https://en.wiktionary.org/wiki/vive_la_diff%C3%A9rence for those who may not recognize that phrase.)
pAIgliacci: as a large language model, I am unable to experience live comedy.
In fairness, you can also have that experience with Microsoft OneDrive.
> Would We Choose SQLite Again? Yes. For a single-server deployment with moderate write volume, SQLite eliminates an entire category of infrastructure complexity. No connection pool tuning. No database server upgrades. No replication lag.
These are weird reasons. You can just install Postgres or MySQL locally too. Connection pool tuning certainly isn't anything you have to worry about for a moderate write volume. You don't ever need to upgrade the database if you don't want to, since you're not publicly exposing it. There's obviously no replication lag if you're not replicating, which you wouldn't be with a single server.
The reason you don't usually choose SQLite for the web is future-proofing. If you're totally sure you'll always stay single-server forever, then sure, go for it. But if there's even a tiny chance you'll ever need to expand to multiple web servers, then you'll wish you'd chosen a client-server database from the start. And again, you can run Postgres/MySQL locally, on even the tiniest cheapest VPS, basically just as easily as using SQLite.
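For context on the "eliminates an entire category of infrastructure complexity" claim in the quoted post, the whole single-server SQLite setup really is a few lines. A sketch using Python's stdlib sqlite3 (table and file names are illustrative):

```python
import os
import sqlite3
import tempfile

# One local file, no server process, no connection pool: the setup the
# quoted post describes. WAL mode lets concurrent readers proceed while
# a single writer appends to the write-ahead log.
db_path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(db_path)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE IF NOT EXISTS visits (ts TEXT, path TEXT)")
conn.execute("INSERT INTO visits VALUES (datetime('now'), '/')")
conn.commit()
print(conn.execute("SELECT count(*) FROM visits").fetchone()[0])  # prints 1
```

The trade-off stands, though: nothing here helps once a second web server needs to reach the same database, which is exactly the future-proofing point above.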
Yeah, we're (UK) only just at the "occasionally cheap 100% renewables" state, and it's maddeningly slow progress. But it seems like a lot of things are suddenly coming online, like battery storage, and the Scotland-England grid upgrade will happen in the next few years. https://eandt.theiet.org/2026/04/02/ps12bn-plan-upgrade-scot...
Crime's been descending from the COVID blip for a while, everywhere, Flock or otherwise. My city saw zero murders in Q1; 2021 saw ~15 by now.
In other words: https://www.youtube.com/watch?v=xSVqLHghLpw