What are the most upvoted users of Hacker News commenting on? Powered by the /leaders top 50 and updated every thirty minutes. Made by @jamespotterdev.
Compared to other drugs, alcohol is metabolically significant: you can easily drink enough alcohol to affect your energy balance directly.
That said, my experience is that my weight swings about 3 kg depending on whether or not I'm using cannabis, but that's got something to do with how it changes my appetite.
Ah, the terrible agony of realizing the hoi polloi aren't entirely wrong about everything after all.
How are people supposed to act while waiting in line to check out?
You can use modern off-the-shelf models for those types of tasks, but a smaller-but-bespoke model will usually be more cost-efficient at scale.
The Go runtime provides its internal scheduler for "goroutines" (roughly "green threads", threads which are managed in-process and not by the OS for lower overhead), a garbage collector, and so on, which in turn are tightly integrated into the language.
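A minimal illustration of what that integration buys you in practice (plain standard library, nothing project-specific): spawning huge numbers of concurrent tasks is routine, because the runtime scheduler multiplexes goroutines onto a small pool of OS threads.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var wg sync.WaitGroup
        // 100,000 OS threads would be prohibitively expensive; 100,000
        // goroutines are routine, since the Go scheduler multiplexes them
        // onto a handful of OS threads.
        for i := 0; i < 100000; i++ {
            wg.Add(1)
            go func(n int) {
                defer wg.Done()
                _ = n * n // some trivial work
            }(i)
        }
        wg.Wait()
        fmt.Println("all goroutines done")
    }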
One of the things Marx got right was analyzing society in terms of economic interests, and realizing that there was an intermediate class whose interests are more closely linked to those of the upper class than to those of the masses.
In feudal times, kings and barons needed lesser gentry to carry out their plans. "Billionaires" likewise need armies of professionals to run their organizations. This group "works for a living," but that's a superficial distinction. In reality, those people's financial interests are strongly linked to the interests of the billionaires. There are a lot of people who "work for a living" that sent their kids to college by helping paper up deals that moved factories and jobs to China. The fact that those lawyers and accountants and bankers also "work for a living" was only a superficial similarity they shared with the factory workers whose jobs were outsourced. What dominated was the material interest: one group had skills that enabled them to benefit from globalization, and another group lacked those skills and suffered from globalization. You'll see the same from AI.
Your "class solidarity" has had the opposite effect of what you probably intend. The more the upper middle class started seeing themselves as "part of the 99%," the more they diluted the mission of organizations that advocate for working class interests.
This subthread has people using "improve" to mean "increase" and "improve" to mean "decrease". Maybe you guys should stop talking past each other and converge on replacement rate?
Up until very recently, and especially in Africa, huge amounts of effort went into reducing birth rate to avoid locally-Malthusian situations with high child death rates and occasional famines.
> why should I spend time developing an employable skill just to raise >2.3 children and not thrive in my career?
This contains the answer: we aren’t paying enough.
Kids used to confer a private, excludable benefit through their labour. Without child labour, their economic value is no longer exclusive to their parents. This transforms children, economically, from a private good into a common resource. Our low birth rates are a tragedy of the commons: a known problem with a known solution.
If we want a higher birth rate, we should have a massive child tax credit. One that can rival the rising cost and opportunity cost of childrearing.
You think they’re leaving any of that behind?
Some sort of memories-style file for topics, so it can generate even more cross-references and a sort of shared world. Not for total coherence; the natural contradictions the LLM is going to generate anyhow are just part of the charm. But sliding the scale a bit further toward coherence, in the direction the "use this page's context when generating the clicked link" behavior already leans, would add some more appeal, I think.
For instance, you can build memories around times, topics, and people, so maybe specific individuals will be quoted multiple times over the course of the wiki and could build up a specific identity within the shared world.
Also... I don't know how you are thinking of this internally, but aside from the issues of token spend and the $$$ involved, I would say: don't even blink at simply nuking the site at some point and starting over once you have some moderation stuff and other limits in place. Don't put it on yourself to filter out what garbage has already been generated. It's all transient content; it lazily regenerates itself anyhow. It's not precious, except for, like I said, the aforementioned token costs, which I don't deny. You can probably put some other tweaks into the prompt to your liking at that point too.
>> The first is when novices in a field are able to produce work that resembles what their seniors produce [...].

> The second is when people generate artifacts in disciplines they were never trained in.
This phrasing made me think of Baudrillard: https://en.wikipedia.org/wiki/Simulacra_and_Simulation , in particular "Simulacra are copies that depict things that either had no original, or that no longer have an original".
The AI produces something that is statistically similar to what it was asked for: a copy, through the weights, of some text selected from all the text it was trained on. A simulacrum of good work.
"But I suspect that the ProgramBench authors are either under-eliciting the AIs, or their tasks are unfair/impossible given the constraints, or both."
I'd go with "impossible":
"Given a gold (reference) executable and its usage documentation, a task worker is asked to write source code and a build script that constructs a candidate executable which should reproduce the behavior of the gold executable."
The test cases are built by an AI examining the source code and producing tests, and later text also confirms that the AI in the production phase can't read the original executable, so it can't reverse engineer it directly. The test cases are therefore drawn from a situation where the tester has vastly more knowledge of the program than the implementer.
That is a losing scenario for anyone, be they human, modern AI, or even some hypothetical perfect programmer. Take ffmpeg as an extreme example. The documentation does not even remotely specify the program. Entire codecs can be missed at a stroke, and each of those codecs is itself a rich set of features that may or may not be used in a given input or output file, but the final tests can freely draw from any of those things. And trying to implement a codec from just some input and output would strain anyone, especially when the input is all but certain to not be sufficiently broad to make the determination for sure.
That sort of issue extends all the way down to even some tiny command-line programs I've written myself. End-user documentation is never a specification; that's not what end-user documentation is. And even if you did hand the AI all relevant specifications, you'd still get an implementation of the specification, and anyone who has ever implemented a non-trivial specification in a real-world situation can tell you all about how even the spec is never enough.
I think that's an absolutely ridiculous test. If you handed it to me as a human, I would simply refuse, because I'd tell you straight up front that it is plainly obvious I'm going to utterly and completely fail, so why even bother spending the time to try?
Which groups are those?
"Among voters under 26 years old, the only race-by-gender group to have majority support for Harris are women of color."
"Our best estimate is that immigrant voters swung from a Biden+27 voting bloc in 2020 to a Trump+1 group in 2024. This is not a small group either - naturalized citizens make up around 10% of the electorate."
(Source: Blue Rose Research, a top data analysis firm hired by Democrats: https://data.blueroseresearch.org/hubfs/2024%20Blue%20Rose%2.... Pages 7 and 9.)
You're overlooking that the geriatric people are carrying on the traditional liberal/conservative debate on both sides of the aisle. My observation living in an area that has a mix of 70+ WASPs and younger black and hispanic people is that the WASPs are the ones who are by far the most incensed by Trump. They don't just dislike his policies. They hate the way he talks.
> Therefore, why not assume every trade is insider dealing, unless proven otherwise?
This kills the crab.
(investors are driven out of markets when it is obvious that they are being cheated)
> The truth is that any empire needs to pick off rivals and rob them, in order to keep the empire going.
This also kills the crab. (And most of us along the way: we're already in a limited kind of world war, the sort of thing that has a history of escalating)
Who else here is old enough to remember when Martha Stewart got jailed for insider trading?
For a while my car buying strat was to buy a new Asian car and run it for 130,000 or more miles. In the pandemic, though, we had to get my son a car in a hurry so he could drive to work, and new and gently used cars were hard to find. So we discovered you can always get a pretty cool old car for that kind of money, with the expectation that pretty soon you're going to spend about the purchase price in repairs. In our case it was just fine: once he had the job he had the money to pay for repairs himself, and it is still a lot less than the payments on a new car.
Well, not quite; unfortunately Rust still has a bit of catching up to do with 1989, and it isn't only the Turbo Vision-inspired IDE.
https://ia801901.us.archive.org/5/items/TurboPascal55/Antiqu...
> Fast! Compiles 34,000 lines of code per minute
https://archive.org/details/bitsavers_borlandtur5.5Brochure1...
Measured on an IBM PS/2 Model 60, meaning an Intel 80286 running at 10 MHz with 640 KB for MS-DOS, up to 8 MB depending on extenders and HMA configurations.
https://en.wikipedia.org/wiki/IBM_PS/2_Model_60
And if you feel like reaching for the language-complexity excuse on 2026 hardware, see OCaml, Delphi, D, or C# AOT.
Would you happen to have a link to that?
I think there's a reasonable argument that the most stable Linux gaming API surface is actually Proton.
None of this is really going to change until we end up with a situation like the Epic/Apple App Store conflict: a major player unable to sell a game on Windows for some reason.
Perhaps the experts have decided that, for this specific instance, the thing we need to do is ad-hoc and throwaway, and is simply not worth paying the extra cost to make it tasteful.
Oh boy, I can relate to the sentiment of the article, it feels like how it has always been in enterprise consulting.
How Apple's lust for services growth makes it indistinguishable from Android, while being more expensive.
> The Daily Mail has been telling us that the Yellowstone supervolcano is about to blow for nearly thirty years now at least.
That's the Daily Mail, though. They platformed Andrew Wakefield, a misrepresentation of science that has a massive body count.
A more serious question for Silicon Valley is the San Andreas fault.
Yes, the "correct" reaction to the ambiguous tiles is to hover a bit indecisively. You need to waste a certain minimum amount of time on the CAPTCHA. I've found that applying videogame reflexes and zapping all the tiles in a short period of time is a fail, even if they're the correct tiles.
>AI didn't take our jobs. Greed did.
Sure. But when it comes to coding, even greed couldn't do it without AI. At best it could outsource, still giving it to humans.
I think actually this competes with the old BerkeleyDB: https://en.wikipedia.org/wiki/Berkeley_DB - which I now see is no longer BSD-licensed, and in any case has been rendered almost extinct by SQLite. It was used for basic on-disk key-value store work.
>They both stem from the same LLM root and positioning them as significantly different is weird and unconvincing to me.
It's the difference between caring and not caring.
And everything turns to shit. Even when you pay premium for it.
But they keep the 4-speed transmission? For what purpose?
One who has a true mastery of programming should be able to write any program in any language, or at least see how to do so, because one thinks in terms more abstract than language-specific constructs yet is able to map them to any language.
Relatedly, here's TLS 1.3 in VB6: https://news.ycombinator.com/item?id=35882985
Win32.
The C standard library is definitely not part of Windows.
It is now with the Universal C runtime, introduced in Windows 10, which is ironically written in C++ with extern "C" { ... }
On systems that aren't UNIX clones, including Windows, it has always been the role of commercial C compilers to provide the C standard library on top of the actual OS APIs.
It would be remiss not to mention Ben Eater's most excellent 8-bit computer (https://eater.net/8bit) and of course the Nand to Tetris book + resources (https://www.nand2tetris.org/).
Except those are the same people that will decide who gets hired, and who gets laid off because of increasing productivity.
And no, this isn't playing what ifs.
I have seen it happening with offshoring, migration to cloud, serverless, SaaS and iPaaS products, and now AI powered automations via agents.
Fewer devops people, fewer backend devs, no translation team, no asset creation team, ...
I have been laid off a few times, having to do competence transfer to offshoring teams; the quality of the output is something C-suites don't care about at all.
Do you wanna bet what is behind Microslop, Apple Tahoe bugs and so forth?
Not quite that high. Around 15 kHz. (15,734 Hz for NTSC, 15,625 Hz for PAL.)
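Those figures fall straight out of lines per frame times frames per second:

    NTSC: 525 lines/frame x 30/1.001 frames/s ≈ 15,734.27 Hz
    PAL:  625 lines/frame x 25 frames/s       = 15,625 Hz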
Who else expected the "certain elements in the shot to be out of focus" link to lead to https://en.wikipedia.org/wiki/Bokeh ?
If you don't want any frame stacking, you'd need to use a dedicated camera instead of a smartphone, because a smartphone without HDR isn't viable.
I have an old Android with a 13MP camera (Sony IMX214, 1/3.06") that leaves HDR off by default. I haven't had a need to turn HDR on except if I'm trying to photograph something with regions of extreme contrast.
I don't think anyone says that really poor people are caused by the existence of really rich people. The argument, as I understand it, is that spreading the wealth of billionaires around would mean fewer really poor people.
Equity and fairness are at odds with each other.
It wasn't until fairly recently
By "recently" you mean Win95? MSVCRT.DLL has been there for at least that long.
"This is actually good news for the US."
This is 100% accurate. I've seen someone apologizing for being stepped on (accidentally, of course). It really does mean "we have, unfortunately and inadvertently, crossed paths and must now ward off the evil spirits by acknowledging this".
Yes? Weren't you? Did you think you were buying a token lottery, where you'd have a billion tokens one day and zero the next?
The statistical murder isn’t by the parents who fell for disinformation; they’re victims, to at least some extent.
The statistical murder is done by the people spreading that disinformation. Wakefield. RFK Jr. Alex Jones. etc.
I'd rather have no ID verification at all. Give them an inch and they'll take a mile.
I see claims like this all the time on HN. Where is this data supposed to be coming from? When I look it up on Google, I get data ultimately sourced from shady online IQ tests (which nonetheless purport to provide a monotonic ranking of every country in the world from China to Nauru, despite the fact that virtually none of these countries collect IQ scores from their populations).
I have no reason to doubt that China is modernizing and improving their gross aptitude for knowledge work! The directional point you're making may very well be valid! I'm just wondering how anyone could be quantifying it in terms of "average IQ", a metric that generally does not exist.
Technology is wonderful.
Physics still gets a say.
I wonder if some of this is because of the "prices are set on the margins" effect of markets. The price of anything is set by the folks who are actively transacting at any given time; if you're not buying and selling, your opinion doesn't matter.
Oftentimes, near a market top, the people who are value investors and actually care about price end up selling off all their holdings. But because they have already sold, and are not buying, they drop out of the market entirely. Prices get set by the people who are price insensitive, because they're the only ones willing to participate. As a result, you often get the "blow off top" right before a market crash, where the stock market moves sharply upwards even though fundamentals say it should crash. All the folks who believe it will crash have already left and no longer participate in price-setting.
Yes, as retaliation for a US/Israeli invasion that is against international law.
The only reason Iran is playing the sole card they hold is that their two core enemies launched a war of aggression.
> Anthropic, who argue AI is a fundamentally different technology
They’re arguing it’s a service. I think Aramark could refuse to contract to provide employees to the U.S. military for a campaign on Chicago.
I've been offered a Book of Shadows for cryin' out loud.
The graphs are in the video. As stated at the top:
> [Note that this article is a transcript of the video embedded above.]
I've tried to like sports, in order to fit in.
I just gave up on it.
Do you have an alternate solution? When we hear so many stories from HN'ers of their websites being hammered by out-of-control crawling and fetching and new levels of AI slop spam?
This is something site owners choose to implement or not. They're the ones paying the extra hosting fees to handle potentially unwanted traffic, and dealing with spam that traditional CAPTCHAs are no longer effective against. Google's not forcing this on anyone else.
There was a bug where scanning took too long with the thousands of articles in there, but I just fixed it.
You can also just type a random URL and visit it, it'll generate an article. That's what I did before I fixed the search issue, and I usually just do that to avoid the search route.
Well, in large part by murdering those that were actually trying to self-determinate...
It depends on your use case.
If you are a B2C app, you are probably more concerned about:
- social providers (Apple and Google being the big ones, but others could play a role--FB or Tiktok for example)
- easy registration (but not too easy, you want to avoid bot spam)
- self-service account management (updating profile fields, consents [CCPA, GDPR, others], resetting passwords)
- single sign-on between your apps (if you have multiple)
- language support (for your backend, and mobile/web front end)
- cost
- possibly MFA, possibly passkeys
The OP has an amusing side point - LLMs have automated sucking up to management. There is a large market for that.
His main point, though, is this:
I have a colleague ... who spent two months earlier this year building a system that should have been designed by someone with formal training in data architecture. He used the tools well, by the standards by which use of the tools is currently measured. He produced a great deal of code, a great deal of documentation, a great deal of what looked, to anyone who did not know what to look for, like progress. He could not, when asked, explain how any of it actually worked. The work was wrong from the first day. The schemas, and more importantly the objectives, were wrong in a way that would have been obvious to anyone with two years in the field.
I've been reading many rants like that lately. If they came with examples, they would be more helpful. The author does not elaborate on "the schemas, and more importantly the objectives, were wrong". The LLM's schema vs. a "good" schema should have been in the next paragraph. That would change the article from a rant to a bug report. We don't know what went wrong here.
It's not clear whether the trouble is that the schema can't represent the business problem, or that the database performance is terrible because the schema is inefficient. If you have the schema and the objectives, that's close to a specification. Given a specification, LLMs can potentially do a decent job. If the LLM generates the spec itself, then it needs a lot of context which it probably doesn't have.
This isn't necessarily an LLM problem. Large teams producing in-house business process systems tend to fall into the same hole. This is almost the classic way large in-house systems fail.
Exactly what we see.
And the worst offenders are those insisting this isn't the case.
>The goal was to test our structured-generation algorithms and their open-source counterparts, replacing the naive “does it accept this string?” with something closer to the real problem: “does it produce the right token distribution?” The experiment kept coming up in conversation, then returning to the roadmap. Last month, I spent half an hour explaining the method to Codex. A few hours later, it had produced a working first version. That’s all it took.
Proving that the bottleneck was, in fact, the code. It's just that the AI wrote it now.
The person who thought "the bottleneck wasn't the code" already had the goal discussed and coherent in their mind.
Code as bottleneck doesn't have to mean "I wanted this feature but it took me many months to finally code it". It is also "I wanted this feature for 2 years, but the friction in sitting down to put it in code and spending 5-10 days on it, etc, put me off".
If the code wasn't the bottleneck, they could just sit down and write it themselves. But they didn't want to go through the effort and time of coding it themselves, as they knew it wouldn't take as little time as it does with the LLM.
(And even when you don't have a clear final spec in mind, the exploratory code+check+discard+retry-new-design loop is also faster with an LLM, precisely because the "code" part is.)
In other words, the code was the bottleneck.
The post appears AI-generated itself, just with instructions to avoid obvious constructions, which still makes for tedious reading.
That's not a caveat, that's a 100% disqualifier.
So, this is in the true spirit of the HN dismissive comment, but this time I think it really does have its place.
Ultrasonic is DOA; sorry, but that just won't do. It's already a nuisance to have all these switching supplies that mess up your hearing (and some can be surprisingly loud); using ultrasound for power delivery is really a non-starter.
There was a company that planned on using ultrasound for power delivery to smartphones. Every engineer with some ultrasonic experience said it wasn't going to work, and they just kept going until they - predictably - went out of business.
https://en.wikipedia.org/wiki/SonicEnergy (formerly Ubeam).
Just wishing it exists does not mean it is possible or practical; that's right up there with Theranos (and I think Theranos actually had a better chance of working, even though that chance was extremely slim).
There are interesting start-ups around the theme of energy scavenging though, that's a far more realistic but still extremely challenging proposition.
I was driving around the other day with my wife and I said, "Hey, you should see how I can order from the McDonald's app and the food is ready when you show up," and in the end she was appalled at what a Filet-O-Fish costs for how much food you get.
To "simply run out of fossil fuels" is like that potentiometer you mention, it isn't like you run out all at once but you run out of the cheap ones first and it gets more expensive.
I remember reading
https://www.amazon.com/Hubberts-Peak-Impending-Shortage-Revi...
in the early 2000s, which was about the coming peak of conventional oil production. It turned out to be wrong in the sense that we knew in the 1970s that there were huge amounts of oil and gas in tight formations that we didn't know how to exploit; people were trying to figure out how to do that economically and had their breakthrough around the time that book came out. So now you drive around some parts of Pennsylvania and boy do you see a lot of natural gas infrastructure.
I remember being in my hippie phase in the late 1990s and having a conversation with a roughneck on the Ithaca Commons who was telling me that the oil industry had a lot of technology that was going to lift the supply constraints that I was concerned about... he didn't tell me all the details but looking back now I'm pretty sure he knew about developments in hydrofracking and might even have been personally involved with them.
This was a podcast, not a pre-scripted talk. I suggest listening to the audio version - it makes it more clear that this was thinking out loud, not carefully considering every word.
... kinda funny that I was working on stuff about 10 years ahead of its time and was having the biggest argument with my cofounder and other people that "subscription plans that aren't cost-based (or at least cost-aware) won't work for intelligent applications" and it seems the world has caught up.
So the argument is that the brand will understand the problem domain well enough to define it so an outsourced software factory can build it. That factory will also run it and maintain it.
I've seen this in the consulting world with long term relationships between a brand and a consulting company, where the consulting company is the technology partner.
I don't think there's anything to stop that from happening with agents; it's just a different means of producing software.
I recall that in the late 1990s physical synthesis was thought to possibly be the next big thing, that it might take over synthesis of musical instruments entirely from the wavetable and FM synth options of the time. It didn't, but my point here is that that is where it stood: a prominent alternative that everyone in the relevant fields was aware of and many people tried to make work, not a recent invention and not just an obscure academic pursuit.
> First, we’re doubling Claude Code’s five-hour rate limits for Pro, Max, Team, and seat-based Enterprise plans.
The fine-print omission appears to be that weekly limits are not doubled. The progressive shrinking of the 5-hour rate limit was indeed an efficiency blocker that finally convinced me to cancel, but only being able to get 4 full sessions a week as opposed to 8 doesn't compel me to resubscribe.
Until the current management retires, as it usually goes.
Never worked with VAX/VMS; however, I have spent enough time reading through its manuals.
Systems programming with compiled BASIC, its Extended Pascal variant, an API surface in which we can somehow find traces of where Windows NT got its design inspiration: it really leaves some room for what-ifs in the evolution of operating system adoption.
Aren't you supposed to return a 0 status code when "yea done!" and some other status code when it wasn't done?
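That's the Unix convention, yes: zero for success, nonzero for failure. A minimal sketch in Go (doWork is a hypothetical stand-in for whatever the program actually does):

    package main

    import "os"

    // doWork stands in for the program's real task.
    func doWork() error { return nil }

    func main() {
        if err := doWork(); err != nil {
            os.Exit(1) // nonzero exit status: something went wrong
        }
        // returning normally from main exits with status 0: "yea done!"
    }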
The original comment has conflated every ill into "have to", combined with political fatalism. Not unreasonable given the way things have turned out, but yes it's not inevitable either. It's just the direction of travel that the majority chose.
Certain "have to" are imposed by the physical world. The world will have to use less oil in 2026 than in 2025, because production has been so heavily impacted by the war. What happens beyond that .. well, only a fairly small number of people get to make that decision. Next US presidential election is in 2028.
Same here: whatever you find from me on GitHub is GPL, mostly there to show HR folks that require code samples, and to play around with some feature I wanted to try out in some language, e.g. C++20 modules, WinRT and so on.
> made-up numbers
It's important to note that in most jurisdictions you can't actually do this legally? Like, you may be able to get away with it, but it is actually illegal to sell financial services by misrepresentation?
I think both Elsevier and the people that appropriate IP for the purpose of training commercially deployed AIs without the consent of the author(s) should be legal.
I don't think that's the reason. Modern supercars have so much power that the average person who can afford them is going to wreck the drivetrain in a very short while if they have to manage all that power themselves. Automatic gearboxes are far more forgiving. You see the same with Porsches that have manual gearboxes: if you read out the ECU you'll see the manuals have overrevved many times more than the autos, if the autos have at all (in fact I don't recall seeing an auto that had overrevved).
Notably, this applies to the "product tour" a lot of products want to give you when they've added new features and I find this particularly obnoxious, especially with Adobe tools.
A lot of times when I am using Lightroom I've just shot 3000 photos at a sports game and feel under the gun to select a few out and develop them, or I am using Acrobat to handle some stressful paperwork which is late. Every day I close hundreds if not thousands of modal dialogs that never should have been opened, and I just don't need another one.
It's bad from the viewpoint of Adobe because I wind up dismissing these messages out of hand.
Adobe wants me to see the value I am getting from my Creative Cloud subscription, the idea being that I am likely to keep paying for it if I enjoy more features in more of the products. Lately, for instance, I discovered Adobe Fonts is great. Looking for free fonts is the most depressing thing in web development and graphic design: I can spend hours looking at fonts and making comps and thinking "I can't stand that 'k'". Adobe Fonts, on the other hand, has quality fonts that are well organized, and often I can put in 15 minutes and walk out with something that works so well with my brand that if I want to set stuff in that font with Pillow, of course I am going to plunk down $90 and buy it -- I don't feel bad at all that the fonts are tied to Adobe tools and my CC subscription.
In terms of execution you just expect something like this to be crap. The integration of Adobe Fonts into Photoshop is broken: it can lock Photoshop hard and force you to kill the process. On the other hand it works great with Illustrator. Marketing-driven development always seems to have a lack of empathy and attention to quality that in the end is self-defeating.
---
Lately I've gotten hooked on the mobile game Arknights, which has extensive lore, too many game modes to count, very complex mechanics, and hundreds of characters with unique abilities (e.g. even the "trash" 3-star characters usually have something special about them and are designed to make teams that punch above their weight).
Arknights gamifies learning the game and engaging with the mechanics by offering daily, weekly, and campaign rewards for taking actions, completing levels, developing characters, etc. This is part of a number of mechanisms that gradually get you up to speed on the game mechanics, reveal the world, etc. These kinds of mechanisms, used gently, could work for applications software.
But I think timing is everything. One of the most annoying people in downtown Ithaca is a panhandler who comes up from behind and starts demanding money or the bandanna off your head, he doesn't bother to make eye contact, he doesn't look to see if you're receptive or for a moment when you might be open, he just makes demands and gets angry when you deny or ignore him. I give money to panhandlers quite often if they engage me person-to-person and are agreeable but this guy is like so much application software today.
Probably CGNAT, "Carrier-Grade NAT": https://en.wikipedia.org/wiki/Carrier-grade_NAT
Huge, huge numbers of machines behind a single external IP mean that your internet access carries all their reputation by proxy. Since switching off Comcast to a smaller fiber company that uses CGNAT I've seen somewhat more Cloudflare challenges.
3rd edition (2025) free to read online
jupyter notebooks: https://github.com/fchollet/deep-learning-with-python-notebo...
> useless policy season
What does this mean?
> these companies
I think the conclusion the market is rapidly and correctly reaching is we aren’t in an AI bubble, we’re in an OpenAI bubble.
Google, Amazon and Anthropic look likely to see ROI on their capital investments because they've made them halfway reluctantly. Microsoft is up in the air. Not sure what Meta is doing. And with the benefit of hindsight, OpenAI used capex as a marketing strategy with investors (while Sam Altman materially lied about his compensation and somehow looped Paul Graham and Jessica Livingston, cofounder of Y Combinator, into his racket).
A long time ago I did that to make Canonical's Launchpad easier to read, mostly making tables look nicer and so on. It was really nice. I saw similar initiatives at Workday as well: browser plugins that added extra functionality to the development instances of the application.
3 DOF per leg, so it needs 12 motors and controllers. Getting that under $1000 is nice.
Here's the US$18 motor: [1] Those things are getting really cheap. He did have to rewind it, though, for more turns with thinner wire. The manufacturer mentions that you can order with "custom Kv", which means you might be able to get a different winding from the factory if you order a reasonable quantity. Especially if you tell them that makes them "robot motors".
Motor overheating might be a problem. The dog, just standing, has its motors stalled under load, converting power to heat. Drones don't do that. Temperature feedback would help if this thing has to operate for extended periods. Remember yesterday's article on humanoid robots and their cooling problems.
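The stall heating is plain resistive loss, P = I²R in the windings. With purely illustrative numbers (not from this project), a 0.5 ohm winding holding 5 A of stall current dissipates

    P = (5 A)^2 x 0.5 ohm = 12.5 W

per motor, continuously, and with no motion there's no airflow to carry it away.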
The motor controller is nice too, and cheap at $49. Needed fixes to the firmware, but that's not surprising at the price. High performance motor controllers used to cost about $1000.
Repurposed drone technology has done wonders for legged robots. We're not quite at the point where limb drive hardware is off the shelf, but it's way better than it used to be.
[1] https://www.xntyi.com/tyi-5008-kv335/kv400-high-speed-brushl...
> But these LLMs are like Happy Gilmore. They get to the green in one shot then they orbit the hole with an extremely dubious short game.
Except that he got good at his short game by the end. LLMs will get there sooner than we think.
The transfer rates limit how much each chip can be active at any given time, so a heat-aware write allocator can pick the least active blocks for the next writes and distribute the heat accordingly. Even if it's not heat-aware, the tendency will be for writes to be distributed over as many chips as there are, and so will the heat generated.
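A minimal sketch of the allocation idea (hypothetical function and state; a real flash translation layer tracks far more than a per-chip write counter, which is only a crude proxy for temperature):

    package main

    import "fmt"

    // pickChip returns the chip with the fewest recent writes, a crude
    // proxy for "coolest", so consecutive writes spread the heat around.
    // Assumes at least one chip.
    func pickChip(recentWrites []int) int {
        coolest := 0
        for i, w := range recentWrites {
            if w < recentWrites[coolest] {
                coolest = i
            }
        }
        return coolest
    }

    func main() {
        writes := []int{12, 3, 9, 7} // per-chip write counts over some window
        c := pickChip(writes)
        writes[c]++ // route the next write to the least-active chip
        fmt.Println("writing to chip", c)
    }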
Now, I would LOVE to see this much SLC flash on a direct to bus attachment setting.
QLC NAND
The datasheet shows 3GB/s sequential write, which for 245.76TB means writing the whole drive takes around 22h45m. Odd that the endurance is specified as "1.0 SDWPD", which is almost meaningless since the drive takes roughly that long to write at full speed.
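The arithmetic checks out:

    245.76e12 bytes / 3e9 bytes/s = 81,920 s ≈ 22 h 45 min

Writing flat out around the clock therefore comes to about 86,400 / 81,920 ≈ 1.05 drive writes per day, so a 1.0 drive-writes-per-day endurance rating is effectively "whatever the interface can sustain".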
At scale, 1.9 times more energy is required for an HDD deployment
...but those HDDs are going to hold data for far more than twice as long. It's especially infuriating to see such secrecy and vagueness around the real endurance/retention characteristics for SSDs as expensive as these.
On the other hand, 60TB of SLC for the same price would probably be a great deal.
Trump also fired the Immigration Detention Ombudsman.[1]
[1] https://www.independent.co.uk/news/world/americas/us-politic...
Oh! Dear Lord. I still want to hear my Indian friends speak Indian to me during Support Calls. These days, I’m hearing American accents trying to calm me down over my complaints on that excess masala in the idli-dosa-pav-bhaji butteerr-chicken combo in the El Camino Eatery in the outskirts of Jhalandar.
You just described 95% of the parts of all software, especially in this era.
Yes, that's the problem
> the LLMs will ship code the LLMs understand, and whether any human specifically understands any particular part will mostly not matter.
I find this particularly funny. There were more than a couple Star Trek episodes where some alien planet depends on some advanced AI or other technology that they no longer understand, and it turns out the AI is actually slowly killing them, making them sterile, etc. (e.g. https://en.wikipedia.org/wiki/When_the_Bough_Breaks_(Star_Tr... )
Sure, Star Trek is fiction, but "humans rely on a technology that they forget how to make" is a pretty recurrent theme in human history. The FOGBANK saga was pretty recent: https://en.wikipedia.org/wiki/Fogbank
It just amazes me that people think "Sure, this AI generated code is kinda broken now, but all we need is just more AI code to fix it at some unknowable point in the future because humans won't be able to understand it!"
Not too long ago, someone submitted an AI demo to HN that resulted in a 3.1GB download upon visiting the page: https://news.ycombinator.com/item?id=47823460
It reminds me of the "dialup warnings" common two decades ago on huge pages (often containing many images). Yes, bandwidth and storage have gotten cheaper, but the unwanted waste should still be called out. I'm not even anti-AI, having waited several hours recently to get some local models to experiment with, but that's because I wanted to and made the decision to use that bandwidth.
It didn't originally say that. They added the clarification just a few minutes ago. The guidelines ask you not to ask people these kinds of questions, for what it's worth.
It would be useful if this site included a human-readable explanation of what terms like "Disposition: Diversion" versus "Disposition: 871PC/No Sufficient Cause" mean.
Or a clear definition of what the question "Which released charge sounds worse?" means.
I don't know why but this chain amuses me: RaTeX -> KaTeX -> LaTeX.
I guess it shows how everyone loves but hates LaTeX and is always trying to bolt on that one last thing that will make it good.