What are the most upvoted users of Hacker News commenting on? Powered by the /leaders top 50 and updated every thirty minutes. Made by @jamespotterdev.
I use the term "30,000 foot view" a lot: https://nanoglobals.com/glossary/30000-foot-view/
It appeals to me because if you've ever taken a flight you can see how the details get progressively erased as you lift. Details that matter for a lot of reasons even if you can't see them.
Hmmm. Don't they have a reporting form or something like that? Down with those filthy Azure pirates on IP 52.166.113.188.
Yes, but in the opposite way to what you think. Do the math, there's billions of people consuming the overly cheap, massively subsidized goods and services parent listed; there's only so many billionaires and they have only so many billions, and most of it is just fake bullshit accounting paper-shuffling anyway.
It's because the consequences of AI are so direct and obvious, and also faster, whereas the inequality and job losses from other tech advances are just less direct.
That is, it's not hard to see why so many main streets in smaller towns have boarded-up retail stores when you can now get anything in about a day (max) from Amazon. But Amazon (and other Internet giants) always paid at least semi-plausible lip service to being a boon to the small fry (see Amazon's FBA commercials, for example). But you've got folks like Altman and Amodei gleefully saying how AI will be able to do all the work of a huge portion of (mostly high-paying) jobs.
So it's not surprising that people are more up in arms about AI. And frankly, I don't think it really matters. Anger against "the tech elite" has been bubbling up for a long time now, and AI now just provides the most obvious target.
The author may have identified that "the idioms come from the use of system frameworks", but they got just about everything wrong about why apps are not consistent on the web (e.g. I was baffled by the reasons listed under their "this lack of homogeneity is for two reasons" section).
First, what he calls "the desktop era" wasn't so much a desktop era as a Windows era - Windows ran the vast majority of desktops (and furthermore, there were plenty of inconsistencies between Windows and Mac). So, as you point out regarding the Win32 API, developers had essentially one way to do things, or at least one that was by far the easiest. Developers weren't so much "following design idioms" as "doing what is easy to do on Windows".
The web started out as a document sharing system, and it only gradually and organically turned into an app platform. There was simply no single default, "easiest" way to do things (and despite that, I remember when it seemed like the web converged all at once onto Bootstrap, because it became the easiest and most "standard" way to do things).
In other words, I totally agree with you. You can have all the "standard idioms" that you want, but unless you have a single company providing and writing easy to use, default frameworks, you'll always have lots of different ways of doing things.
> You cannot offer a taxi service in a car that is not fit for the road, and then just shrug when it crashes and people get hurt.
The problem is that there's no overt way to tell whether the "car" (code) you're looking at is someone's experimental go-kart made by lashing a motor to a few boards, or a well tested and security analyzed commercial product, without explicitly doing those processes on your own.
The problem is that all the go-kart hobbyists who make moderately popular go-kart designs end up being asked to meet all sorts of commercial-grade requirements.
The people on the consuming end think "reliability is their job!" and try to force all their requirements and obligations onto the go-kart makers, which usually doesn't end well.
You can steer it towards reusable components, though.
Find a run you like, and build off that.
Another thread with a bit more life to it: https://news.ycombinator.com/item?id=47739524
And have either a small population or a very low per-person energy budget.
But: 7 isn't the number that matters, what matters is that next year it will be 8 or 9. That would be worth documenting.
The interesting bit to me is that everybody just plays along. Rather than that he gets thrown off his perch and replaced by someone sane. Trump is an idiot, no doubt about it. But all those that voted for him and that continue to enable him are the bigger idiots, and if they're not idiots they are probably hoping to profit from the chaos he's creating.
That's because chaos is a good time to do some grabbing. You could see this during the downfall of the former Soviet Union, which in spite of being dirt poor still made a handful of families obscenely wealthy. Now imagine being able to do a similar resource grab on the scale of the modern United States.
Trump 1, COVID, Trump 2, the Russian war on Ukraine, AI and a couple of more wars... It's a miracle things haven't gone further down - yet. But I'm really wondering how long our social constructs will be able to withstand this kind of concerted assault.
Have you ever seen how many GCC has for plain old C?
Ads do not pay enough to cover AI usage. People see the big numbers Google and Facebook make in ads and forget to divide the number by the number of people they serve ads to, let alone the number of ads they served to get to that per-user number. You can't pay for 3 cents of inference with .07 cents of revenue.
You also can't put ads in code-completion AIs, because the instant you do, their utility to me at work drops to negative. Guess how much money companies are going to pay for negative-value AIs? Let's just say it won't exactly pay for the AI bubble. The moment a code agent puts an ad for, well, anything into code that gets served out to a customer, someone's going to sue. The merits of the case won't matter, nor will the fact that the customer "should have caught it in review"; the lawsuit and the public reputation hit (how many people here are reading this and salivating at the thought of being able to post an angrygram about AIs being nothing but ad machines?) would still cost the AI companies creating the agents way too much to risk.
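As a sanity check, here's the comment's arithmetic made explicit in a few lines of Python. The per-query cost and revenue figures are the comment's own; the query volume is purely an assumption for illustration:

```python
# Per-query figures from the comment above; queries/user/day is an
# assumed illustration, not a measured number.
inference_cost = 0.03          # USD per query (the comment's 3 cents)
ad_revenue = 0.0007            # USD per query (the comment's 0.07 cents)
queries_per_user_per_day = 50  # assumption

daily_loss_per_user = (inference_cost - ad_revenue) * queries_per_user_per_day
annual_loss_per_user = daily_loss_per_user * 365

print(f"loss per user per day:  ${daily_loss_per_user:.2f}")
print(f"loss per user per year: ${annual_loss_per_user:.2f}")
```

Whatever the real volumes are, the point survives the exact numbers: as long as revenue per query is a small fraction of inference cost per query, the per-user subtraction stays negative at any scale.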
Given enough people enough guns and school shootings are inevitable.
Allow a handful of people to grab the economy and all means of production, and violence will be the result.
At this point in time it is simply cause and effect, the surprising thing to me is how long it is holding together. But at the rate the economy is being wrecked I fail to see how it will do so for much longer.
Effectively, the French elites started the French Revolution by being a little more greedy than the population would tolerate. That set off an avalanche of what was in effect a series of mini revolutions, ultimately resulting in modern France, which is in many ways unlike any other country in the world. The United States had its war of independence (aided by France, by the way), and then its civil war. But it never had a class war - yet - and this article presages that class war.
It could well be that the small number of rich people who are currently effectively a government outside of the government genuinely believe that their wealth and power insulate them from the consequences of pushing their greed and wealth to ridiculous levels. But I suspect the author is right that this is approaching some kind of threshold. I have no way of seeing across the divide; I'm hoping for another France rather than another Somalia.
This is why technology businesses and professionals need to take a little bit of an active role in local politics. Otherwise you get nonsense.
I just use Stripe, hopefully that works, I'll check. Thank you.
> But the next near death experience came in the 2010s with the years-delayed move from 14nm to 10nm processes. This became kind of a joke in the industry and it's honestly still not clear to me what went wrong. Intel had previously migrated to smaller and smaller processes like clockwork.
I'd love to see a post-mortem. Intel's 14nm came out with Broadwell in 2014, and it was their desktop/server node for seven years, until Alder Lake in 2021. But they got stuck on the micro-arch as well. They shipped what was essentially the same Skylake core for five years from 2015 to 2020.
It was marketing, installed on the Statue of Liberty in 1903, when the U.S. was already fully developed. It doesn’t reflect the original intent at all.
It's crazy, a few weeks ago the limits would comfortably last me all week. This week, I've used up half the limit in a day.
How much actual money do you think the “people with billions of dollars” have in comparison to the needs of the population as a whole? I think you’re very confused about where the actual income in the economy goes.
Who says it sucks at front end? Unlike Stackoverflow, AI does a great job of "center a div." I tend to like working from reference documentation which is great for Python and Java but challenging for CSS where you have to navigate roughly 50 documents that relate to each other in complex ways to find answers.
Like I don't give it 100% responsibility for front end tasks, but working together with AI I feel like I am really in control of CSS in a way I haven't been before. If I am using something like MUI, it also tends to do really well at answering questions and making layouts.
Thing is, I don't treat AI as an army of 20 slaves that will get "shit" done while I sleep, but rather as a coding buddy. I very much anthropomorphize it, with lots of "thank you" and "that's great!" and "does this make sense?", "do you have any questions for me?" and "how would you go about that?". If it makes me a prototype of something, I will ask pointed questions about how it works, ask it to change things, change the code manually a bit to make it my own, and frequently open up a library like MUI in another IDE window and ask Junie "how do I?" and "how does it work when I set prop B?"
It doesn't 10x my speed and I think the main dividend from using it for me is quality, not compressed schedule, because I will use the speed to do more experiments and get to the bottom of things. Another benefit is that it helps me manage my emotional energy, like in the morning it might be hard for me to get started and a few low-effort spikes are great to warm me up.
OK sure, AI is terrible, but when has humanity ever said "yeah OK fine, we'll put this particular genie back in the bottle"?
The question is "what do we do now?".
It feels like I'm getting less and less for my money every day. A few weeks ago I was programming all week and never getting close to the limit, yesterday half my weekly limit went away in a day. Changing the limits mid-subscription is just theft.
RAVED is more likely. These things aren't cheap.
I haven't tried MiniMax but Claude has gotten seriously nerfed lately. A few weeks ago I could code all week on the $100/mo plan without getting close to the limit, now I consumed half the limit in the first day.
Ridiculous, my company has committed to $200k annual plans and they changed the deal mid-way. We'll have to see about a refund.
Indeed. I always keep it installed on my devices, as it turns the phone into a poor man's tricorder, and that's handy sometimes.
Most recently I used it to check light levels at home in different rooms, to determine where we need to boost or replace LED strips. Sure, there's million Lux meter apps, but Phyphox is better than all of them and demonstrates why these things shouldn't be dedicated apps in the first place. In the past I also made use of EM and vibration frequency displays to troubleshoot hardware.
A complement to that is https://play.google.com/store/apps/details?id=org.intoorbit.... which, once upon a time, helped me track down a source of rage-inducing, late-night high-frequency beeping that was driving us insane - down to the specific apartment in a block on the other side of the street. I ended up friends with those neighbors, after teaching them how to disable the alarm clock on their Bluetooth radio when they go away for a weekend.
Absolutely not "open source" - here's the license: https://huggingface.co/MiniMaxAI/MiniMax-M2.7/blob/main/LICE...
> Non-commercial use permitted based on MIT-style terms; commercial use requires prior written authorization.
And calling the non-commercial usage "MIT-style terms" is a stretch - they come with a bunch of extra restrictions about prohibited uses.
It's open weights, not open source.
It is insignificant if you're doing 100k queries per day, and you gain a lot for your 3 extra seconds a day.
Jacob Kaplan-Moss, February 2024: https://social.jacobian.org/@jacob/111914179201102152
> “We believe that open source should be sustainable and open source maintainers should get paid!”
> Maintainer: introduces commercial features “Not like that”
> Maintainer: works for a large tech co “Not like that”
> Maintainer: takes investment “Not like that”
The LLMs read everything.
I'd go further and say this is also the cybernetics equivalent of the religious teachings about humans, specifically the whole "judge by one's deeds, not by one's words" thing. So it's not like it's a novel idea.
Also worth remembering that most systems POSIWID is said about, and in fact ~all important systems affecting people, are not designed in the first place. Market forces, social, political, even organizational dynamics, are not designed top-down, they're emergent, and bottom-up wishes and intentions do not necessarily carry over to the system at large.
after Apple removed a character from its Czech keyboard
I wonder what the thought process (or perhaps lack thereof) at Apple was. Did no one on the likely somewhat-large team that did this think, "wait, this could lock out users who may have used that character"?
In the immortal words of Linus Torvalds: "WE DO NOT BREAK USERSPACE!"
Now one of the ways in might be those companies who claim to be able to break iPhone security for law enforcement and the like, but I'm not sure if they'd be willing to do it (at any price) unless you could somehow trick them into thinking you had some "interesting" data on there...
The modding community lives on where it always was: hacking, gaming, and the demoscene. And for the most part that was never on Apple platforms, but rather the Amiga, PC, Atari, ...
Neither side can have it both ways, but there's way too much whining about people not paying.
Want people to pay for your tools? Don't offer them for free.
This is related to my usual point here, that if one offers something for free under a GPL or MIT license, claiming to do so for the betterment of humanity, only to later retract it because corporations profit without paying or AI companies use it for model training, that person is an entitled liar who released proprietary software while using openness and generosity as a marketing strategy.
Proprietary software is fine. Lying about it and using good ideals as a marketing strategy is not. That applies as much to "released as MIT so it would be useful to many, then unreleased because the author realized it might end up in the training data of some LLMs (and in so doing, actually become useful to many people)" software as it does to blogs and all the whining about AI denying them credit (and pre-AI, search engines, except back then the developer community was on the side of search rather than the free-but-with-ads/credit publishers).
> Every, single time, someone posts a cool paid project, there is the usual comment why pay, look at MIT/BSD/Apache/... project so and so.
That comes from some combination of: the project looking not worth a cent, probably not working (at least not for the intended use case), payment being a big step that starts a real multi-party relationship (much more than just looking at a webpage or playing with code locally), and the poster being a student or otherwise young.
I too strongly favored MIT over everything when I was a kid. Didn't have money to pay for anything, and GPL was complicated and my slightly older colleagues (with probably more business sense than I) didn't like it.
"A whole civilization will die tonight, never to be brought back again."
(It didn't, it was just an empty threat, but calls to political violence are the currency of the day)
The ultimate entitlement is refusing to pay for tooling, while expecting to be getting a paid job as well.
I am hard line about not feeling sorry for projects going away or being taken over by organisations; when it mattered, people should have actually sponsored them, instead of boasting about how great it is to get it all for free/gratis.
Every, single time, someone posts a cool paid project, there is the usual comment why pay, look at MIT/BSD/Apache/... project so and so.
This wasn't the first electronic reservation system. It was preceded by the Magnetronic Reservisor, custom-built for American Airlines by Teleregister. It had magnetic drums and remote terminals, but it did not use general-purpose computers.
Teleregister built a range of such special-purpose systems, for several airlines, railroads, and air traffic control. General purpose CPUs were both too expensive and too slow for these jobs in the early 1950s. Using a general purpose computer for everything didn't really happen until the minicomputer era in the mid-1970s. SABRE had to wait until general-purpose computers got better.
It's interesting that transaction processing operating systems died out. Tandem's OS worked that way. But the run-the-transaction-program-once-and-flush-it approach is almost dead. Except, amusingly, for CGI programs, which are true use-once transaction programs, with an inefficient implementation.
[1] https://www.youtube.com/watch?v=F4d-OFDs1hY
[2] https://s3data.computerhistory.org/brochures/teleregister.sp...
J and K are ASCII-based array languages which were inspired by APL.
Iran's state media reported that the F-15 rescue mission was a cover to steal enriched uranium, something which fits the facts a lot more than them constructing an airstrip in enemy territory and blowing up at least two MC-130s just to rescue a pilot:
https://economictimes.indiatimes.com/news/new-updates/did-us...
Also suspicious that Iran came to the negotiating table just a couple days after the F-15 mission after insisting for the other 5 weeks that there would be no negotiating and they were not even in contact with Washington.
> It is obvious to anyone that if Iran put all the resources they poured into secret nuclear facilities and missiles into economic development, infrastructure, and education, Iran would be in a completely different place today
Funny, I was just thinking that about the US.
> Eleventy might not receive new features, your website will still work.
The beauty of SSGs, in one sentence, folks.
I'm not aware of any CVEs in HTML, either.
That stuck out at me too, along with the em-dashes above.
https://soupault.net/ is about using plain HTML, but doing index pages, RSS feeds and so on from that. You even get away with not having frontmatter, because CSS like selectors allow those meta pages to retrieve title, date etc. from the HTML pages.
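The idea of pulling metadata straight out of plain HTML pages (instead of frontmatter) is easy to sketch. This is not soupault itself (which is its own tool, with its own selector syntax); it's a minimal Python stand-in using only the standard library, grabbing a page's title from its first `<h1>` and its date from a `<time datetime>` attribute:

```python
from html.parser import HTMLParser

class MetaExtractor(HTMLParser):
    """Pull title/date out of a plain HTML page - no frontmatter needed."""
    def __init__(self):
        super().__init__()
        self.title = None   # text of the first <h1>
        self.date = None    # datetime attribute of the first <time>
        self._in_h1 = False

    def handle_starttag(self, tag, attrs):
        if tag == "h1" and self.title is None:
            self._in_h1 = True
        elif tag == "time" and self.date is None:
            self.date = dict(attrs).get("datetime")

    def handle_endtag(self, tag):
        if tag == "h1":
            self._in_h1 = False

    def handle_data(self, data):
        if self._in_h1:
            self.title = (self.title or "") + data

page = """<html><body>
<h1>My First Post</h1>
<time datetime="2024-05-01">May 1st, 2024</time>
<p>Hello.</p>
</body></html>"""

p = MetaExtractor()
p.feed(page)
print(p.title, p.date)
```

An index page or RSS feed is then just a loop over extracted (title, date) pairs, which is essentially what the approach described above buys you.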
CheapCharts + iTunes Store + Apple TV (non-subscription) = zero ads and offline viewing.
The thing about SSGs is that you only need a small percentage of the functionality they offer, and for what? Instead of a simple link syntax you can remember in HTML,
<a href="there">description</a>
there is something weird and irregular in Markdown that I always have to look up in the manual, plus all sorts of other Markdown WTFs. Every time I tried to get started on a personal site with an SSG I would get depressed looking at hundreds of ugly themes, get depressed with the mysterious and crappy cloud-side build systems, get depressed at the prospect of customizing them, etc. So I'd start experimenting, never finish, and come back six months later to make another attempt that fails.

When I really needed a landing page that looked like it fell off a UFO, I did it in Vite + React (such a joy to use semantic components; write
<Event date="2026-04-18">Earth Day Parade Ithaca Commons</Event>
and the build is a simple Python script that uploads the dist files to S3 (no "WTF went wrong with the GitHub action"), invalidates CloudFront [1], extracts metadata, and maintains the metadata database. There's a clear path to extending the system to do exactly what I want in the future, unlike some SSG which I would have to relearn from scratch in six months when I want to make a big change. And I had it up and running in front of end users in a weekend.

That is, SSGs have no commercial potential, because any individual or organization capable of maintaining and customizing an SSG can create one from scratch that does exactly what they need, with less cost and effort. Success is only possible by hypnotizing people into thinking otherwise; in many fields of software that happens every day, but I think not with SSGs. Those people are going to stay asleep and dream of Drupal and Wordpress.
[1] ... and if I want to move to some similar platform I just implement it instead of struggle with "plugins" and "modules" and other overcomplicated extension mechanisms
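A deploy script of the kind described above is genuinely small. This is a hedged sketch, not the commenter's actual script: the bucket name and distribution ID are placeholders, and the boto3 calls (`upload_file`, `create_invalidation`) require AWS credentials to actually run; `plan_uploads` is the testable pure part:

```python
import mimetypes
from pathlib import Path

def plan_uploads(dist_dir):
    """Map every file under dist/ to an (s3_key, content_type) pair."""
    dist = Path(dist_dir)
    plan = []
    for f in sorted(dist.rglob("*")):
        if f.is_file():
            key = f.relative_to(dist).as_posix()
            ctype = mimetypes.guess_type(key)[0] or "application/octet-stream"
            plan.append((key, ctype))
    return plan

def deploy(dist_dir, bucket, distribution_id):
    """Upload the build output to S3, then invalidate CloudFront."""
    import boto3  # hypothetical wiring; needs AWS credentials configured
    s3 = boto3.client("s3")
    for key, ctype in plan_uploads(dist_dir):
        s3.upload_file(str(Path(dist_dir) / key), bucket, key,
                       ExtraArgs={"ContentType": ctype})
    boto3.client("cloudfront").create_invalidation(
        DistributionId=distribution_id,
        InvalidationBatch={
            "Paths": {"Quantity": 1, "Items": ["/*"]},
            # CallerReference just needs to be unique per invalidation
            "CallerReference": str(hash(tuple(plan_uploads(dist_dir)))),
        },
    )
```

Setting the `ContentType` explicitly matters here: S3 defaults to `binary/octet-stream` otherwise, and browsers won't render the pages.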
It has been a while (I think ever since Safari introduced Reader Mode), and I do almost all my reading of websites in Reader Mode. For some websites, such as paulgraham.com, daringfireball.net, and quite a few others with horrible typography, I have set “Use Reader Mode when Available.”
> Every year or so there's a new article about some new spectacular storage medium. Crystals, graphene, lasers, quartz, holograms, whatever. It never materializes.
Of course, wouldn't you expect that for a fairly mature technology you'd get tons of false starts from competing tech before eventually getting one breakthrough that completely changed everything? I mean, you could have written a perfectly analogous comment about how AI and neural networks never really amounted to much for 50-60 years until, all of a sudden, they did (and even if you think AI is currently overhyped, it's undeniable that in the past 5 years AI has had an effect on society probably much greater than all the previous history of AI put together).
I prefer to read this academic paper as "Oh, this is a really interesting approach, I wonder what its limitations are" vs. interpreting it as a "this new storage tech will change the world!!!" announcement. I feel like the first approach leads to more curiosity, while the second just leads to cynicism and jadedness.
Y’all don’t believe me when I say it but LLMs are good at language because they’re bad at reasoning about probability.
Will it be possible to have a “Record” feature, say for a few minutes? Would be lovely to save videos, especially the landings.
> If I am unemployed long enough in America, I will eventually die.
Not wrong.
This is a brochure site from "The Alliance for Secure AI", which I am unfamiliar with, but whose site gives off an "AGI weirdo" vibe. Am I misreading it?
What are they finding? Buffer overflows? Something else?
Also, if someone has the time and tokens, would they please run the OpenJPEG 2000 decoder through this tester? It's known to be brittle. The data format has lots of offsets, and it's permitted to truncate the file to get a lower-rez version. That combo leads to trouble.
It’s crazy that you can compile a custom kernel and it’ll boot and the GUI will run.
I am using an 8 year old phone that was mid when I bought it new for ~$300 or so. It's only in the last year that I've begun to find it annoyingly slow. Now I prefer using an actual computer for most things and only rely on the phone for messaging and maps when I'm out and about (plus some lightweight web browsing), but my point is that mediocre actually works fine. I have hardly any apps on it; if there isn't a web interface, I don't need to interact with it.
I am about the same age and started loading programs off cassette tapes. The fact that I can get a terabyte of storage in a micro SD card the size of my pinkie nail for under $200 still impresses me.
> it wouldn't stop them from getting guns
Maybe I'm overestimating the difficulty of making guns. But I'm aware of zero conflicts in which small arms were manufactured in situ. Even in e.g. Myanmar/Burma. The fact that even remote conflicts go through the trouble of importing arms suggests this might be more difficult than you suggest.
"A scanning-probe prototype already constitutes a functional non-volatile memory device with areal density exceeding all existing technologies by more than five orders of magnitude."
Does that mean a scanning tunneling microscope is the I/O mechanism? That's been demoed for atom-level storage in the past. But it's too slow for use.
Better call Center for your ground speed [1].
[1] https://www.thesr71blackbird.com/Aircraft/Stories/sr-71-blac...
It's a reasonable conclusion that Earth has the most competitive market and cheapest prices in the solar system, and that there is nothing worth taking back to Earth.
Artemis is not motivated by the profit motive but by scientific value, mogging other countries, and other non-material values. And yeah, it transfers money from the taxpayer to many private organizations that can in turn kick back some to politicians to keep the money going. Something like that.
The moon landing is a much more difficult mission than just circling around the moon and I'd argue that we don't have a realistic plan for the landing right now.
Here's a 10 year old study from Norway's navy.[1] It's a marginal technology for navies. It's more useful for defending urban areas from air strikes, because urban areas have lots of emitters, too many to attack.
There's a lot of current interest in what Iran is using for air defense. Search "Iran air defense passive radar". Some people speculate passive infrared. Some speculate passive radar. Nobody who's posting really knows, of course.
Ukraine uses many passive systems, including audio. Sort of like Shot Spotter, but for drones.
The headline is confusing the issue. Bitcoin miners are losing money because October's crash took Bitcoin from $126,000 to below $70,000, and the Iran war has pushed up oil and electricity prices. The minor difficulty drop is a result of that, as some Bitcoin miners drop out. It's not the cause.
Rome wasn't built in a day, and its computing and networking technology wasn't replaced in a day either.
It is how bitcoin is designed to work, but it also shows very directly how proof-of-work systems can never scale to be the global monetary replacement their boosters push. If the opposite happened, and the price for some reason skyrocketed to, say, $1 million per bitcoin, it would induce more miners until the difficulty and consequent electricity cost (regardless of the efficiency of electricity generation) also rose to the neighborhood of $1 million per coin. At that point you're far beyond "Argentina levels" of electricity and getting into "Europe levels" of electricity to run the network.
The electricity demand (and here I mean the overall cost of the electricity, so improvements in $ per kilowatt just mean you need to use more electricity) in proof-of-work systems fundamentally scales linearly with the overall valuation of the coins in the network, which means proof-of-work systems can never scale as large as their fanboys would have you believe.
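The scaling argument above can be sketched in a few lines. This is an illustrative equilibrium model, not a measured figure: the issuance number assumes roughly a 450 BTC/day block subsidy, and the margin is a guess at how much of revenue competitive miners burn on electricity and hardware:

```python
# Competitive mining drives spend toward revenue; revenue = price * issuance.
# All numbers here are illustrative assumptions, not measurements.

def equilibrium_annual_spend(price_usd, coins_per_year, margin=0.9):
    """Miners collectively spend up to `margin` of their revenue on costs."""
    return price_usd * coins_per_year * margin

coins_per_year = 164_250  # assumed: ~450 BTC/day subsidy * 365 days

for price in (70_000, 1_000_000):
    spend = equilibrium_annual_spend(price, coins_per_year)
    print(f"price ${price:>9,}: ~${spend / 1e9:.1f}B/year mining spend")
```

Note that spend per coin mined comes out at roughly the coin's price regardless of hardware or electricity efficiency, which is the comment's point: the cost floor tracks the valuation, not the technology.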
I speculatively fired Claude Opus 4.6 at some code I knew very well yesterday as I was pondering the question. This code has been professionally reviewed about a year ago and came up fairly clean, with just a minor issue in it.
Opus "found" 8 issues. Two of them looked like they were probably realistic but not really that big a deal in the context it operates in. It labelled one of them as minor, but the other as major, and I'm pretty sure it's wrong about it being "major" even if is correct. Four of them I'm quite confident were just wrong. 2 of them would require substantial further investigation to verify whether or not they were right or wrong. I think they're wrong, but I admit I couldn't prove it on the spot.
It tried to provide exploit code for some of them, none of the exploits would have worked without some substantial additional work, even if what they were exploits for was correct.
In practice, this isn't a huge change from the status quo. There's all kinds of ways to get lots of "things that may be vulnerabilities". The assessment is a bigger bottleneck than the suspicions. AI providing "things that may be an issue" is not useless by any means but it doesn't necessarily create a phase change in the situation.
An AI that could automatically do all that - write the exploits, then successfully test them, refine them, and turn the whole process into basically "push button, get exploit" - is a total phase change in the industry. If it in fact can do that. However, based on the current state of the art in the AI world, I don't find it very hard to believe.
It is a frequent talking point that "security by obscurity" isn't really security, but in reality, yeah, it really is. An unknown but presumably staggering number of security bugs of every shape and size are out there in the world, protected solely by the fact that no human attacker has time to look at the code. And this has worked up until this point, because the attackers have been bottlenecked on their own attention time. It's kind of just been "something everyone knows" that any nation-state level actor could get into pretty much anything they wanted if they just tried hard enough, but "nation-state level" actor attention, despite how much is spent on it, has been quite limited relative to the torrent of software coming out in the world.
Unblocking the attackers by letting them simply purchase "nation-state level actor"-levels of attention in bulk is huge. For what such money gets them, it's cheap already today and if tokens were to, say, get an order of magnitude cheaper, it would be effectively negligible for a lot of organizations.
In the long run this will probably lead to much more secure software. The transition period from this world to that is going to be total chaos.
... again, assuming their assessment of its capabilities is accurate. I haven't used it. I can't attest to that. But if it's even half as good as what they say, yes, it's a huge huge huge deal and anyone who is even remotely worried about security needs to pay attention.
You can rationalize anything by only considering the upside relative to alternatives' downsides.
You're being nice about it but I think you're inadvertently expressing literally the sentiment Dan was referring to.
Write up that stuff in a file and tell the agent to look at it. Say “take a look at file A as an example of how we do this sort of thing.” Good comments in the code that explain how it works and what the patterns are help too, but you don’t need to go line by line duplicating the code.
"Stand on Zanzibar" (1969 Hugo Award Winner) — John Brunner
Great book.
>OK, so what are these recordings all about? Why are they here?
>Greetings fellow web trippers, my phone phreak handle is Mark Bernay and 35 years ago I used to go on phone trips. Yes, it's true: just like the people in the picture at the top, I would drive around to small towns primarily for the purpose of playing with their payphones. I often brought along my trusty Craig 212 portable 3-inch reel-to-reel tape recorder (this was before cassettes were popular) to record the phone noises and narrate information about them for my friends. I don't go on phone trips anymore and you are probably thinking that this is because I grew up, but no, I never did. The reason I stopped phone tripping is that all phones are about the same all over the country nowadays and they are really boring.
>This picture shows my recording equipment around 1968, which I used to edit these tapes and prepare them for playing on a public phone number. My current desk is just as messy, but with PC's instead of reel-to-reel tape recorders.
>There have been 1237108 accesses to this page.
.........................
>Secrets of the Little Blue Box (1971)
https://www.ckts.info/downloads/articles/Esquire%20Magazine%...
Or the exact opposite. Cato are just neoliberal elite shills. Anything that hurts the bottom line of that class is "bad".
Maybe try some water/juice drinking bottles with a cap and/or a large straw?
If you cut out the vulnerable code from Heartbleed and just put it in front of a C programmer, they will immediately flag it. It's obvious. But it took Neel Mehta to discover it. What's difficult about finding vulnerabilities isn't properly identifying whether code is mishandling buffers or holding references after freeing something; it's spotting that in the context of a large, complex program, and working out how attacker-controlled data hits that code.
It's weird that Aisle wrote this.
Almost everything you use came from academics and research labs.
Half-broken visualizations look suspiciously like ones I get from LLMs, so who knows, maybe it's some LLM setting up their first blog?
I don't need to conduct 1000 transactions per day. I don't foresee a world in which it will be some sort of fatal inconvenience to need to approve all purchases. I certainly don't plan on ever just handing over my credit card to an LLM, due to its fundamental architectural issues with injection, and I still don't anticipate handing it over to any future AI architecture anytime soon, because I struggle to imagine what benefits could possibly be worth the risk of taking down such a basic, cheap barrier.
All that stuff about support, though, seems inevitable.
> That source is less than a million was bet, and now there are hundreds of millions bet on the ceasefire, and millions more on other Iran bets, and then even more bet on Russia/Ukraine.
I'd imagine those numbers are typical for any transaction facilitated by the Internet when comparing 2003 to 2026.
If costs stay high, then people will drop out of bitcoin mining, which will cause supply to go down and bitcoin prices to go up.
"... fan's recordings of 10k concerts..."
59-year-old Aadam [sic] Jacobs made his first recording 42 years ago in 1984 when he was 17.
He would have had to average 238 recordings/concerts per year — nearly 5/week — over those 42 years to accumulate 10,000 of them.
Give it a bit!
https://www.npr.org/2026/03/26/g-s1-115240/iran-war-strait-h...
(I'm being snarky here, but COVID definitely exposed some supply chain vulnerabilities.)
You're missing that humans are often irrational.
They may be hoping it goes back up.
https://en.wikipedia.org/wiki/Pardon_of_Richard_Nixon
https://www.presidency.ucsb.edu/documents/proclamation-4311-...
> Now, Therefore, I, Gerald R. Ford, President of the United States, pursuant to the pardon power conferred upon me by Article II, Section 2, of the Constitution, have granted and by these presents do grant a full, free, and absolute pardon unto Richard Nixon for all offenses against the United States which he, Richard Nixon, has committed or may have committed or taken part in during the period from January 20, 1969 through August 9, 1974.
Not quite as long, but much more significant. (No violence exception, the criminal was the President, and they were crimes against the entire country, not some random drug/tax charges.)
It's a gap, but not due to lack of trying.
I made https://github.com/TeMPOraL/cloze-call a little over 16 years ago, and it was itself inspired by something that was at least as old at the time.
Screenshot: https://jacek.zlydach.pl/old-blog/download/projects/ClozeCal...
Wonder if I can turn this into a browser-playable version with just LLMs.
EDIT: Put Claude Code on the task (reason for choice: Claude Desktop lets me just throw it at a folder with unzipped bundle of sources and assets I found laying around my blog archive).
EDIT2: Holy shit it worked. Will upload it somewhere soon.
EDIT3: Here it is, in its full 800x600, 30 FPS cap glory: https://temporal.github.io/ClozeCall-Web/
The process I used was, have CC run over the original sources and create this document:
https://github.com/TeMPOraL/ClozeCall-Web/blob/main/design.m...
Then, after verifying it matched what I remembered and clarifying some decisions (sections 4 and 5), I just told it to make a static, client-side, no-build-step, no-webshit-frameworks game deployable to github.io, and it did it in a single shot (+ a second small request to fix the transparency of some assets). Personally, I'm impressed at how well it went. What a nice highlight of the weekend for me.
While a great improvement, the article also gives a good overview of how Cargo isn't without issues when going outside pure-Rust desktop/server scenarios; maybe there are some ideas for improvements in there.
From one of the ground staff for Artemis: https://bsky.app/profile/captnamy.bsky.social/post/3mi36brfw...
"1968 and the country was on fire. Vietnam. Assassinations. Civil unrest. Protests.
Apollo 8 was the one bright event of a terrible year.
2026 and the country is on fire. Iran. Corruption. Fascists. Civil unrest. No Kings.
I hope Artemis II will stand out as a bright spot for our country."
Some more background on her: https://chicago.suntimes.com/news/2026/04/01/chicagoan-amy-l...
I liked this game a lot!
One thing I noticed: the game was pretty hard when I just tried to tap based on where I thought a good "launching point" was. But then I realized I could use the dashed lines in the orbit circle as basically "arrows" pointing to where the ship would go if launched at that point in the orbit. I instantly got much better once my strategy became: (a) pick the dash in the orbit circle that points to the next planet, then (b) focus only on tapping when the ship hits that dash.
I think a "hard mode" would be to get rid of the dashes in the orbit circle and just make it a solid line.
Yeah, but they still don’t have a realistic plan to land astronauts there.
Like the space shuttle before it, Artemis proves that nobody can beat the US at spending money on boondoggles.
Lunar missions are inconsequential next to problems here on Earth: we can't afford to build high-speed rail and transit, we can't build housing (affordable or otherwise), we already lost the next war to Boeing and Lockheed-Martin, we won't build affordable electric cars, etc.
What we need is affordability porn!
I will still wait for the heat shield analysis. Doing a crewed flight was not what I would have done - I’d use a Falcon Heavy to put one or more dummies through different trajectories to make sure we have enough experimental data to extensively model the shield behaviour, especially in non-nominal entries.
I would love to see a Unibody polished to a mirror finish. Would be a perfect match for Queen Amidala’s shuttle.
I only ever learned to use slide rules, and I've already forgotten everything about them.
It becomes more interesting when you couple a flexible display with it.
The tools have existed for decades; devs just have to actually use them.
Packing structures can improve performance and overall memory usage by reducing cache misses.
People still think that this administration will play fair in the next elections.
Usually socialist revolutions fail because nobody can agree on who the new leaders should be. Workers seize control of the means of production...and then what? Who determines what they should do with it? Who do they look to for guidance? If you elect/appoint/select someone, now they are the new capitalist. If you don't, the machinery sits idle while various factions fight amongst themselves.
We saw this with Occupy Wall Street and the CHAZ in the U.S. - these protests didn't fail because they were crushed, they failed because local police basically let them win, and then once they won, different factions had different ideas of what to do next. We also see it at the state level with the Soviet Union (where a strong dictatorship did eventually emerge - the communist revolution didn't mean everybody was equal, it just meant some people were more equal than others) and in Vietnam (which became intensely capitalist less than 15 years after the communists won).
The function of the business owner, CEO, or other executive figure is simply to be a symbol of which direction the organization needs to go. They don't do any work themselves, and they are selected for their ability to look pretty and shout platitudes that other people follow. But that symbol is needed to actually get the people moving in one direction.