What are the most upvoted users of Hacker News commenting on? Powered by the /leaders top 50 and updated every thirty minutes. Made by @jamespotterdev.
I have a counter-study with size n=1: I did all my recovery from tonsilectomy on paracetamol and definitely noticed it working. That was however on the maximum safe dose.
(one of the major problems with paracetamol is that the effective dose is only a few multiples away from the dose which starts to cause liver damage! It is by a long way the most dangerous OTC drug)
indeed. riscv for instance. also, afaik, xor’ing is faster. i would assume that someone like mr. raymond would know…
Yeah, maybe it'll sober some people up to stop pretending they can't see how useful LLMs are in pretty much everything. A sharp tool to wield, easy to cut yourself with, but also extremely useful.
Honestly, it felt much stranger to me to learn, a few years ago, that they're 3D-printing rocket engines. With my experience limited to building my own PLA/ABS 3D printer out of salvaged motors and parts printed on another printer, it was hard to imagine how this is anywhere near safe and precise enough. But turns out, FDM-ing some plastic blobs is not the same as fusing Inconel powder with lasers. Same with using LLMs for software engineering (whether in aerospace culture or otherwise), it's just not the same as asking ChatGPT "please make me an app to do something idk how i cannot code send halp".
This is the best kind of science there is: direct, empirical test.
That doesn't make it better! It did somehow slow down the regulatory response because politicians are dumb, though.
Has the availability of deepfake porn generation reduced the demand for deepfake porn featuring real people? When deepfake generators are capable of creating convincing imagery of flawless ideal fake humans, why do you suppose there’s so many real humans who report being non-consensual subjects of deepfake porn?
Some politician will be recorded doing something & he'll have his people release a thousand photos/videos of him doing crimes. And they'll say, look, it's a smear campaign.
And his enemies will do the same, hopefully resulting in less blind trust for everyone in the population, which can only be a good thing.
because it emulates an Ethernet packet driver but it is really just SLIP over a serial port
Today, the same is true of many other physical-layer protocols that developed later, such as WiFi and the GSMA mobile standards; they seem to have converged on the Ethernet frame format at the software interface, presenting the appearance of an Ethernet NIC to software, because that's the easiest way to make use of existing network stacks. There's also the weirdness of tunneling protocols like PPPoE which only exist to tunnel Ethernet through non-Ethernet systems.
OP makes that concession in the first section of the post. (I may or may not have made a similar comment before deleting in kneejerk shame)
A former trustee in the inner-ring suburb in which I live owns and manages rental housing throughout the municipality and is a vocal opponent of building new housing, and of the argument that supply and demand functions in the housing market. I could screenshot him for you, but you have no idea who he is, so: just take my word for it, these aren't "alleged" people. They're a major force in local politics around the country, which is where this fight is primarily being fought.
Comments like this don't fill me with confidence: https://github.com/brexhq/CrabTrap/blob/4fbbda9ca00055c1554a...
// The policy is embedded as a JSON-escaped value inside a structured JSON object.
// This prevents prompt injection via policy content — any special characters,
// delimiters, or instruction-like text in the policy are safely escaped by
// json.Marshal rather than concatenated as raw text.
Big pie chart is labelled "The share of each source that was used to meet changes in energy demand vs. 2024". What does that mean?
What's "AI"?
(I'm going to guess you mean generative AI such as image/video/text generation used to create slop on Facebook, but I really wish posts like this would clarify.)
I was sick about this before y'all because I was involved in three efforts to try to commercialize foundation models before the technology was ready.
Most of all I am sick of people being sick of it!
Does anyone know whether the AMD chips are more performant? I like AMD more as a company, especially since I like healthy competition. I'd prefer an AMD chip if it's as efficient and performant as the Intel ones.
> Ford’s CEO said that if Chinese EVs are allowed into USA it will destroy the US automakers
I'm a strong advocate for giving Chinese EVs an import quota per manufacturer (with a 1.5mm-unit annual cap on total Chinese EV imports, downgradable to 1mm in a recession, representing about 10% of demand).
This gives American consumers–and designers–access to and a taste for what the competition is doing. But it preserves a moat for our own producers.
Now we'll just get teabagged by killer robots for the lolz.
> this is an Israeli initiative
The source shows an Israeli person. That doesn't make it an "Israel operation." It looks like the site of every small nonprofit I've ever seen.
A/B tests only work if the subjects don't realize they are in a A/B test.
They later said: https://twitter.com/TheAmolAvasare/status/204672549859272297...
> When we do land on something, if it affects existing subscribers you'll get plenty of notice before anything changes. Will hear it from us, not a screenshot on X or Reddit.
If you don't want things like this spreading through screenshots of X and Reddit, don't run "tests" like this in the first place!
(Also "if it affects existing subscribers" is a cop-out, I need to know the pricing of Claude Code for NEW subscribers if I'm going to adopt it at a company with a growing team, or recommend it to other people, write tutorials etc.)
> What else?
I used to have an assistant make little index-card sized agendas for gettogethers when folks were in town or I was organising a holiday or offsite. They used to be physical; now it's a cute thing I can text around so everyone knows when they should be up by (and by when, if they've slept in, they can go back to bed). AI has been good at making these. They don't need to be works of art, just cute and silly and maybe embedded with an inside joke.
It's a mission, not a business plan. Colonising Mars was always a moonshot as well. But it aligned the company's priorities.
My point is regardless of what you think of a Dyson sphere, this theory seems to predict what the company does better than assuming everything's a ketamine fever dream.
> vast majority of the cost is hiring teachers
My 1,500-student public California high school currently lists 7 administration-team members (principal, executive assistant, three assistant principals, school-facilities manager and food-services manager) and 11 administrative-support members (school data-processing specialist, print-center technician, senior-clerical assistant, separate registrar and attendance roles, interventions-support specialist, and others). That doesn't include 4 site maintenance, a network-support and a separate network-systems specialists; a separate media-library specialist; 2 psychologists; a college and career advisor; 4 school counselors; a wellness-space support specialist; and a social science and an athletic director.
34 administrative hires. One per 44 students. Many of those roles strike me as fluff.
The UNESCO standard is meant for developing countries.
In 2021, California spent about $121 billion on K-12, out of a GDP of $3.4 trillion, or about 3.5% of state GDP. That puts it above the OECD average of 3.3%, around the same as France at 3.5%. blob:https://www.oecd.org/702dcc03-0749-41b6-af41-112fd1af1bfb. (This is the parent page: https://www.oecd.org/en/data/indicators/public-spending-on-e.... You have to select non-tertiary education, which is basically what we call K-12.)
It's a bit of a pickle, given that managing your inbox (or at least reading it, classifying and summarizing contents, identifying action items etc.) is one of the most valuable applications of LLMs today, especially if you move beyond software developers having LLMs write code for them.
No. When code is cheaper to generate than to review, it's cheaper to take a (well-written) bug report and generate the code yourself than to try to figure out exactly what the PR does and whether it has any subtle logical or architectural errors.
I find myself doing the same, nowadays I want bug reports and feature requests, not PRs. If the feature fits in with my product vision, I implement and release it quickly. The code itself has little value, in this case.
How is this constitutional lol. Especially the age discrimination aspect.
People had literally the same arguments about Amazon, a company Matt Yglesias once described as "a charity run on behalf of the American consumer by the finance industry".
Did you pay the sticker price or the intended price?
Over here in Poland we have a law that the store must sell you the good for the price it advertised, so in that case they'd be forced to accept $4.50 because of their mistake. May sound too biased in favor of the customer, but before that, the "errors" in price tags were more common.
... been that way for a real long time. Somebody ought to start a startup for an AR app that replaces those billboards with other billboards you might find in some other kind of city, say Boston or Philadelphia or Atlanta so people in SV can have a little empathy for "the rest of us".
Same in Germany, it ends around similar Apple and Thinkpad prices.
I think of this movie
https://en.wikipedia.org/wiki/Silent_Running
which could be read as an expression of 1970s environmentalism or something like 2001 A Space Odyssey where a crew member runs amok instead of the computer.
Seriously. Especially since self-checkout is almost always with a card tied to your identity, not cash.
Depending on the value, the police probably aren't going to show up at your address, but use that card again at the store in the future and you might find the security guard coming over. Or, like many stores, they wait for you to do it repeatedly until it adds up to enough for a felony instead of just a misdemeanor, and then they bring felony charges...
The stores have cameras. Likely someone is well aware those weren't all bananas, and has it on video.
Play stupid games, win stupid prizes.
Oddly a lot of people just don't see things like that, just like my boomer relatives didn't seem to notice that SDTV TV programs looked stretched out on an HDTV. (Did this normalize obesity?)
5.4 thinking says "Just right of center, immediately to the right of the HAM RADIO shack. Look on the dirt path there: the raccoon is the small gray figure partly hidden behind the woman in the red-and-yellow shirt, a little above the man in the green hat. Roughly 57% from the left, 48% from the top."
(I don't think it's right).
I've been trying out the new model like this:
OPENAI_API_KEY="$(llm keys get openai)" \
uv run https://tools.simonwillison.net/python/openai_image.py \
-m gpt-image-2 \
"Do a where's Waldo style image but it's where is the raccoon holding a ham radio"
Code here: https://github.com/simonw/tools/blob/main/python/openai_imag...Here's what I got from that prompt. I do not think it included a raccoon holding a ham radio (though the problem with Where's Waldo tests is that I don't have the patience to solve them for sure): https://gist.github.com/simonw/88eecc65698a725d8a9c1c918478a...
Model card for the API endpoint gpt-image-2 (which may or may not reflect the output from ChatGPT Images 2): https://developers.openai.com/api/docs/models/gpt-image-2
API Pricing is mostly unchanged from gpt-image-1.5, the output price is slightly lower: https://developers.openai.com/api/docs/pricing
...buuuuuuuuut the price per image has changed. For a high quality image generation the 1024x1024 price has increased? That doesn't make sense that a 1024x1024 is cheaper than a 1024x1536, so assuming a typo: https://developers.openai.com/api/docs/guides/image-generati...
The submitted page is annoyingly uninformative, but from the livestream it proports the same exact features as Gemini's Nano Banana Pro. I'll run it through my tests once I figure out how to access it.
Maybe for right now, but even in the very near future it seems like data center expertise would absolutely be a core competency of any AI leaders.
Heck, look at Facebook. Granted, they got started slightly before AWS, but not by much. Owning all of their own data centers is a huge competitive advantage for them, and unlike most of the other hyperscalers they don't sell compute to other companies (AFAIK).
Again, the commitment is for $100 billion in spend. Building lots of data centers for a lot cheaper than that price should absolutely be doable. Also, geographic distribution isn't nearly as important for AI companies given the way LLMs work. The primary benefit of being close to your data center is reduced latency, but if you think about your average chatbot interface, inference time absolutely swamps latency, so it's not as big a deal. Sure, you'd probably need data centers in different locales for legal reasons, and for general diversification, but, one more time, $100 billion should buy a lot of data centers.
One can observe that even the proposed alternative is still awfully complicated compared to the web apps I remember making circa 1999. Much more capable, but still more complicated. Of course the entire "typescript -> compile -> minimize -> pack -> etc." pipeline is yet more complicated.
There is some essential complexity that will arise in this space because a client/server app fundamentally can't abstract over the client/server division. There's not much you can do about that... well... you can overabstract and try too hard to wipe it away, and fail, and make something that will be worse to use than an approach that acknowledges and understands the distinction, which is a modestly popular approach... but there's no way to get rid of it entirely.
There is some complexity that is going to be essential in an app context where the DOM is not exactly the best API for interacting with an application, and there is always complexity where there is an impedance mismatch.
Those two things alone are non-trivial on their own. Exactly how much they account for the complexity of the current approaches is up for debate, though.
At the risk of incurring some ire in replies, it's not clear to me that if someone sat down with a clean sheet of paper and tried to create a new platform that roughly matched the current web platform in capability that they could do that much better. There's a lot of deprecation that could be trimmed off for sure, but perhaps for the purposes of this discussion we wouldn't count that against the current platform too hard. (The new platform will only be missing it by virtue of being too young to have it; over time it'll pick it up too.) Maybe building in some sort of reactive-programming capability at the base. A better version of web components that works well enough to be the de facto standard and prevent too much competition from emerging. But whatever data structure you use to access your app, it's still going to have roughly the complexity that we have today. You could do some better. But I'm not sure you could do that much better, such that it would be heaven compared to today. It's still going to be huge and complicated and have all sorts of weird interactions when you try to put the pieces together.
Trying to build an app in a language that is also trying to be a high-powered layout language (it's not the best, not exactly "commercial quality", but it's pretty capable) that is also a document standard that is also the de facto VM for the world is not going to be simple.
The HTMX route that this article is advocating has some value.
My YOShInOn RSS reader works that way and I think it is great -- but it is my own thing and I don't have customers and managers coming to me with requirements and I can build everything with that architecture in mind.
As I see it the basic front end problem is that you click on something and then the page is updated and this updating could be really simple (like you are typing into an autocomplete box and search results appear under it) or it might impact a large number of elements spread all over the page and some applications might be very dynamic and updating dependent on the UI state and can't be figured out ahead of times (imagine a developer tool which has lots of property sheets and tool windows)
HTMX is very simple for the simplest cases, requires some back end framework for harder cases (like a page might have 20 partials in it when it draws for the first time, 3 partials need to be redrawn when you click on something, you need to format a response packet that draws those 3 partials in the right place) and breaks down for the hardest cases. Part of the React puzzle is that we often use React for apps that don't need its full power but hey, even for something CMS-adjacent why fight with unintuitive Markdown (face it!) when you could write
<MyElement attributeThatMattersToMe="yes">Here's the content</MyElement>
which conforms to your needs.As much as I love HTMX, I got into it when my dissatisfaction with React was at its peak, more recently React is my go-to for anything from whimsical personalized landing pages to biosignals application that use Web Bluetooth, USB and Serial. Why? I use it at work all day and know how to get things done. I can draw anything at all, even 3-d worlds with A-Frame. That frustrating build system is clearing up.
This was probably partly a Google refresh token theft (given the length of the access). No inside info, just looking at how the attack occurred.
OAuth 2.1[0] (an RFC that has been around longer than I've been at my employer) recommends some protections around refresh tokens, either making them sender constrained (tied to the client application by public/private key cryptography) or one-time use with revocation if it is used multiple times.
This is recommended for public clients, but I think makes sense for all clients.
The first option is more difficult to implement, but is similar to the IP address solution you suggest. More robust though.
The second option would have made this attack more difficult because the refresh token held by the legit client, context.ai, would have stopped working, presumably triggering someone to look into why and wonder if the tokens had been stolen.
0: https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1
The risk of forking an actively maintained OSS repo for commercial purposes is that you have to compete with the source repo and in this particular industry where business moats are weak/nonexistent, it's tough to differentiate.
That said, pivoting to a cloud-hosted agent might be even more difficult to differentiate.
I've been maintaining an abstraction layer over multiple providers for a couple of years now - https://llm.datasette.io/
The best effort we have to defining a standard is OpenAI harmony/responses - https://developers.openai.com/cookbook/articles/openai-harmo... - but it's not seen much pickup. The older OpenAI Chat Completions thing is much more of an ad-hoc standard - almost every provider ends up serving up a clone of that, albeit with frustrating differences because there's no formal spec to work against.
The key problem is that providers are still inventing new stuff, so committing to a standard doesn't work for them because it may not cover the next set of features.
2025 was particularly turbulent because everyone was adding reasoning mechanisms to their APIs in subtly different shapes. Tool calls and response schemas (which are confusingly not always the same thing) have also had a lot of variance - some providers allow for multiple tool calls in the same response, for example.
My hunch is we'll need abstraction layers for quite a while longer, because the shape of these APIs is still too frothy to support a standard that everyone can get behind without restricting their options for future products too much.
Looks more similar to routines for me (just launched the other day): https://code.claude.com/docs/en/routines
I bought a newer iPhone. My older one had the button to go to the home screen, the newer one replaced that with swipe up.
After a year, the swipe up is still a nuisance. It often doesn't work, and I have to swipe up several times.
Have you looked at DBOS for the durable execution layer? Much easier to integrate and not nearly as heavyweight for you/your customers.
Nowadays you can make use of some transducers ideas via gatherers in Java, however it isn't as straightforward as in plain Clojure.
People used to be able to smoke in pubs. But I agree it wasn't quite so culturally foundational.
I'm not going to lose sleep over the idea of a smoking ban, since it was already driven to the margins, but the implementation of it by age is really weird. Clearly a move to avoid annoying pensioners, like everything else.
> 9 of the 10 senior developers didn't know how many bits were in basic elemetary types
That's likely thanks to C which goes to great pains to not specify the size of the basic types. For example, for 64 bit architectures, "long" is 32 bits on the Mac and 64 bits everywhere else.
The net result of that is I never use C "long", instead using "int" and "long long".
This mess is why D has 32 bit ints and 64 bit longs, whether it's a 32 bit machine or a 64 bit machine. The result was we haven't had porting problems with integer sizes.
The MNT Pocket Reform keycaps are gorgeous. I wonder why they didn't go with these for the bigger model and keyboard.
I used to love Brooks Brothers shirt, I would get them used and wear them to rags and they were the best.
The Duluth Trading Company runs cringe ads in my opinion but I traded my evil twin's old black Carhartt coat for a red Duluth coat that my son got from his last employer with a small monogram for my winter phase foxographer costume and it is 100% great.
On the AI/Gemini and the eventual replacement for an internal stack, Apple has done that before with Apple Maps.
At the start people laughed at the melting bridges and the airport in a farm (the popular Airfield farm in Dublin, which we visited countless times with our daughter and their friends), but, in the end, it's a competent replacement for Google Maps.
Apple is betting that good enough will get cheaper - with cheaper training, and that it will be possible to run good enough inference with local models fine tuned on the device with data you have on your iCloud. Google will still have their colossal structure and these huge deployments will, clearly, get us to superhuman levels of artificial intelligence, but that's a lot more than good enough.
As the MacBook Neo demonstrates, sometimes the brains of a phone is all you need for a desktop computer, and, if that's good enough for you, it makes no sense to get a Mac Studio with 256GB of memory, unless you want it to tune your iPhone's models in seconds rather than overnight on the charger.
Expectable, given that LiteLLM seems to be implemented in Python.
However kudos for the project, we need more alternatives in compiled languages.
Stealibg OAuth keys from first party app to impersonate it in order to not have to pay for usage with properly generated API key was never part of normal use anywhere.
The trouble here is not that the age checking is right or wrong but it would be unethical for anyone who has the competence to develop this kind of app to work on it because it is fundamentally unworkable -- it would be like me taking money from somebody to help them with their perpetual motion machine.
The kind of developer you are going to get is either going to be somebody who knows what time it is and cynically works on a project that they know is going to fail (unethical) or someone who is not going at it with "the end in mind" but is just cosplaying as a software developer (incompetent)
I didn't say "easy".
Yeah I don't understand it, it's a marathon with three companies perpetually a minute ahead, and people keep saying "I expect the stragglers to catch up".
The only thing I can see them meaning is what you said, "in a minute the stragglers will be where the leaders were a minute ago", which, yeah, sure.
Except the part of them actually being native to the OS they are running on.
When people discuss this subject, I wonder what they think the counterfactual world would have looked like. Do people think China could have been kept backwards forever? I notice nobody goes around accusing Maurice Chang of doing this. Or W Edwards Deming.
This reminds me of the collapse of the Gros Michel banana variety, also due to disease. Near-100% loss of a food crop, even a luxury one, is an alarming thing to see though.
(I was wondering if climate change would be mentioned, but that doesn't seem to be critical there yet. Starting to be noticed in European grape terroir.)
So .. ICBM nuclear exchange? Or are we suddenly expecting a large conventional war between nuclear powers in which both sides decide not to bother with them?
TIL Scotland Yard is the Metropolitan Police. I thought it was its own thing named "Scotland Yard" for some reasons I never bothered to investigate.
Which kind of proves your point.
It is getting better, now that they finally got the Smalltalk lessons from 1984.
"Efficient implementation of the smalltalk-80 system"
> Now I finally no longer get as many ads for pornography and illegal drugs
Always fun when the ads have less rigorous content policy enforcement than the actual videos.
I occasionally use non-adblocked youtube, and I think the ads were mostly Audible/Jet2 holidays? I don't think I've ever seen a local business ad on there, presumably because making a video ad in the first place is expensive.
The places for local businesses seem to be Facebook and Instagram.
Building one of these is pretty trivial, especially in the AI era. This one looks like a wrapper for an R2 bucket. The nasty bit is what happens when they start getting abused.
Were users assured that the selfies they emailed to support would not be retained? I'm loath to defend the multimillion dollar corporation, but let's at least be fair.
Disappointed but not surprised. Their intent is not to comply, so you'll have to sue them at every step for every atom of compliance.
Link to underlying paper: https://arxiv.org/html/2604.15316v1
Apple basically spearheaded the war on general computation. Before them, phones used to be more or less open, Apple cracked down on that very quickly.
Same, however I do conceed having the whole assembler toolchain written in Python was also kind of cool, even if it may have been AI generated.
Even cooler would have been to have the 6502 directly generated from the LLM.
I am overjoyed to see this story here, we haven't gotten a lot of these hacks lately. Well done!
It's inherent in the way LLMs are built, from human-written texts, that they mimic humans. They have to. They're not solving problems from first principles.
How does that transalte to VMs? If "encryption at rest" is done at the guest level, instead of (or in addition to) host, that would be pretty close to minimal "encrypted except when it use" time and protect against virtual equivalents of pulling a hard drive out of a data center.
It never worked with the few books I got from Amazon, because I could not get them anywhere else. I usually only buy epub/pdf.
I think you mean country/region capitals, or countries like Germany.
I can assert than this isn't a thing in most Portuguese big cities, although it would be great to have it.
And proudly written in C++!
I like having C++ on my toolbox, but when Bjarne Stroustoup proudly talks about "F-35 Fighter Jet’s C++ Coding Standards" I am not sure it lands how he thinks it does, given how it turned out to be.
Quite certain that it also contributed to all the software glichs F-35 suffers from.
I tried out with a tiny project. It is the muscle memory built up with Git that kicks in and wish that JJ does it. I went through all the raw mistakes and doing things the hard way with Git, that, my mind plays trick trying to use JJ with the Git mindset. For now, I have mapped all of my Git aliases to JJ equivalent. But I would like to learn it the right way and do it the JJ way. This is going to take time, I’ll go slow.
It can't really lose to git, because underlying it is git
I mean, sure, that can happen, but that obviously depends on what the test is testing, it's not like it's bad in all cases to say "now plus 1 year". In the case in question it's really just "cookie is far enough in the future so it hasn't expired", so "expire X years in the future from now" is fine.
I appreciated John Gruber's piece on this: https://daringfireball.net/2026/04/another_day_has_come
That's because anonymity is the entire point of Monero. Of course legal vendors don't like anonymity, every government wants to be able to track every transaction anywhere.
Saying Monero hasn't been able "to overcome this" is like saying boats have been unable to overcome driving on roads. Technically true, but very much not the point.
Around 1900. They were held in very dubious regard in the early days of development.
https://www.usni.org/magazines/proceedings/1941/january/chap...
On British naval luminary compared submarine warfare to piracy, leading to the emergence a few years later of a tradition of Royal Navy submarine captains flying the Jolly Roger after completing successful missions.
I do wish Intel would make the other string instructions faster, just like they did with MOVS, because the alternatives are so insanely bloated.
it is never used with a prefix (the value would be overwritten for each repetition)
...which is still useful for extreme size-optimisation; I remember seeing "rep lodsb" in a demo, as a slower-but-tiny (2 bytes) way of [1] adding cx to si, [2] zeroing cx, [3] putting the byte at [cx + si - 1] into al, and [4] conditionally leaving al and si unchanged if cx is 0, all effectively as a single instruction. Not something any optimising compiler I know of would be able to do, but perhaps within the possibility of an LLM these days.
the audience will just prompt the AI themselves and cut out the middleman
It seems like "personalised recommendations" are heading in that direction, but don't forget there's also the social aspect --- listeners will want to share what they liked the most, so even if they end up automatically prompting the AI to generate exactly the music they want, they'll find others who also like very similar music.
The reality is, essentially nobody makes money by creating music. Taylor Swift, you might say, is a billionaire. Is it from selling music? Nope, it's from selling tickets to her shows. People want to see her perform live. A Taylor Swift impersonator would make no money singing the same songs. A cover band wouldn't do any better.
It's the same with authoring books. Almost nobody makes any significant money off of them. It's so paltry I don't really understand why authors are so concerned about copyright infringement.
People steal my copyrighted stuff all the time. I long ago stopped caring about it. But I do very much like Github as it protects me from others accusing me of stealing their code.
If you want to make money, you'll need a plan that does not require copyright protection.
This makes sense. The 1-bit model implies needing 2x as many neurons, because you need an extra level to invert. But the ternary model still has a sign, just really low resolution.
(I've been reading the MMLU-Redux questions for electrical engineering. They're very funny. Fifty years ago they might have been relevant. The references to the Intel 8085 date this to the mid-1970s. Moving coil meters were still a big thing back then. Ward-Leonard drives still drove some elevators and naval guns. This is supposed to be the hand-curated version of the questions. Where do they get this stuff? Old exams?)
[1] https://github.com/aryopg/mmlu-redux/blob/main/outputs/multi...
> But you know what's also frustrating? Code bases which involve multi-step manual steps to build.
Yes. That's what bash scripts are for.
and you can’t even find anyone who will fit them for the older models.
I'm quite certain you can find many companies in the far East who will produce cells of exactly the size and shape you want, as long as you're willing to order a minimum quantity. There are also a few semi-standard sizes of prismatic cells available.
That said, having a few truly standard sizes like we had with 1.2/1.5V and 9V batteries would be a good idea. BL-5C and its variants were a de-facto standard for many years too, and apparently are still available new.
Tell me your browser is a disaster without telling me your browser is a disaster.
Good point.
Then again, whatever process we're using, evolution found it in the solution space, using even more constrained search than we did, in that every intermediary step had to be non-negative on the margin in terms of organism survival. Yet find it did, so one has to wonder: if it was so easy for a blind, greedy optimizer to random-walk into human intelligence, perhaps there are attractors in this solution space. If that's the case, then LLMs may be approximating more than merely outcomes - perhaps the process, too.
Occlusion culling is really tough in systems where users can add content to the world. Especially if there's translucency. As with windows (not Windows), or layered clothing.
You're in a room without windows. Everything outside the room is culled. Frame rate is very high. Then you open the door and go outside into a large city. Some buildings have big windows showing the interior, so you can't cull the building interior. You're on a long street and can look off into the distance. Now keep the frame rate from dropping while not losing distant objects.
Games with fixed objects can design the world to avoid these situations. Few games have many windows you can look into. Long sightlines are often avoided in level design. If you don't have those options, you have to accept that distant objects will be displayed, and level of detail handling becomes more important than occlusion. Impostors. Lots of impostors.
Occlusion culling itself has a compute cost. I've seen the cost of culling big scenes exceed the cost of drawing the culled content.
This is one of those hard problems metaverses have, and which, despite the amount of money thrown at the problem, were not solved during the metaverse boom. Meta does not seem to have contributed much to graphics technology.
This is much of why Second Life is slow.
Ukraine's top drone commander was interviewed by The Economist.[1] He used to be a commodities trader, and he looks at warfare from that perspective. His goal is to kill Russian soldiers faster than Russia can replace them, until they run out of young men. His drone units are currently doing this, he claims. They supposedly lose one Ukrainian drone unit soldier per 400 Russians dead. Material cost per dead Russian soldier is about US$850. He looks at attrition war as an ROI problem.
His risk management strategy is to have redundant everything, so there's no single point of failure. Lots of small drones. Distributed operators. Many small factories. Varied command and control systems. He makes the point that they use lots of different kinds of drones - some fast with wings, some slow with rotors, some that run on treads on the ground. There's no "best drone". Using multiple types in a coordinated way makes it hard for the enemy to counter attacks. No one defense will stop all the drones.
Ukraine built 4,000,000 drones in 2025. This year, more. The Ukrainian military needs a new generation of drones about every three months, as the opposition changes tactics. They view most US drones as obsolete, because the product development and life cycle is far too long. (See "OODA loop" for the concept.)
This is a big problem for the US military's very slow development process. Development of the F-35 started over 30 years ago.
[1] https://www.economist.com/europe/2026/03/22/ukraines-top-dro...
How do they tell? Because they're awful? Some of them might be good. And many, if not most, songs created by unknown artists are terrible.
What exactly in the process doesn't make sense?
Unless the comment has been edited, it does make sense (other than the fact it's intention might just be an ad for BookShelves reader):
- Use Calibre to cross-convert books.
- Leverage public domain ebook catalogs: Standard Ebooks, Internet Archive, Gutenberg.
- For on-device reading BookShelves app might be an option, with no cloud lock-in.
>I'm at the stage where sometimes I make something that sounds good (to me) but I know it requires work (in the "not fun" sense) to finish it and even then, it will likely never be appreciated by anyone but myself.
That's true of 99% of very polished finished work too. Amazing bands and artists in Spotify with sub 1000 streams/month.
>None of these problems are "new", but I feel like AI is making this question of "why do it" or "what is worth doing" even more urgent. Kind of wondering how others are affected by all this, if at all.
Absolutely. One big concern is that even if you do it and you're proud of it, many will think it's AI anyway.
Plus the over-inflation of AI generated shit. It could all die in a fire.
Vulkan is not Apple.
Metal is Apple's API.
>The same people that shout "Capitalism sucks, free us from our labor" are the exact same types that hate AI. The exact machine that will free you from your labor, when harnessed correctly, is the exact thing you hate.
No, AI will only free us from our jobs, while still keeping the need to find money to feed ourselves.
"When harnessed correctly" is exactly what wont happen, and exactly what all the structural and economic forces around AI ensure it wont happen.
We are on the outside, for the types of customers we serve it is MS SQL, Oracle, DB2, or some SaaS product that you can only access via GraphQL.
Very seldom I use something like Postegres, last time was in 2018.