What are the most upvoted users of Hacker News commenting on? Powered by the /leaders top 50 and updated every thirty minutes. Made by @jamespotterdev.
You make it sound like rocket science or huge bother.
If you're a hacker/dev/tech nerd, that's trivial. You do similar things twice before breakfast without thinking about it.
I don't see why not, it's all contributing to acceleration, although it's going to be rough on the tyres.
I've observed this as well, that software in a hardware org is a bit of a second class citizen. The absolute worst case is AMD leaving a trillion dollars on the table because they can't compete with CUDA software APIs, but lots of places are like this.
But software in general - well, in America - got pulled up into the stratosphere by FAANG money. I feel that should have had more of an effect than it did on non-software orgs.
What benefit would people gain from the reports? Average rate of success/time is interesting, but I'm not sure what you'd do with this information other than a bit of local press discourse. I suppose it's nicely timed for the council elections?
Ah, but that may well be because of your scope probe's leads! The sharper the edge the more likely that will happen. That's what those shitty little springs are for that come with your scope probe: you disconnect the ground wire and put that spring on the naked scope probe pin around the ground collar. Then where you want to measure you use the pin to go to the signal and the little spring to reach the nearest ground. Presto: clean signal (or at least, much cleaner). Also, make sure to tune your probe (that's what the little plastic screwdriver with metal tip is for, there is a small trimmer in the probe you can reach through a hole and that is critical at high frequencies) and avoid probes with switchable 1/10 like the plague, over time the switches go lame and then you'll be tracking all kinds of weird gremlins.
There's this myth (one that comes from pop culture) that you end up sounding like Tom Waits.
In reality, some phlegm aside, their voice is still the same in any way that matters.
If you knew people who didn't smoke and started (not uncommon in the 80s and 90s, quite a few people I know started smoking in university, or after the stress of a first job, some even later), and also the inverse, you can trivially hear it for yourself.
The question is, when it screws up, who gets blamed, and who pays. If it's the customer, and you can afford to lose a small fraction of customers, it may be worth it. It's just another form of crappy customer service. If it's internal, and it's all output, no input, and the internal organization doesn't really need that info that badly, that might work out.
But give it the authority to do something and there's real trouble.
He's responsible for the great success of Silicon Graphics, Inc. (SGI).
> and held leadership roles with the Aspen Institute, Vanguard Group, Silicon Graphics, Inc. (SGI), and Stanford University.
It's not oscillating at 50MHz. Look at the waveform, with the big spike in the middle. That's a spike at some lower frequency, wider than the screen, followed by ringing. Need to zoom out the time base some more to see the period of the big spikes. It's no higher than 4 MHz (the screen is 12 units wide) and possibly much lower. (Assuming that M:20ns on the display means 20ns/grid division. The manual is a bit hazy on that part of the UI.)[1]
The power regulator IC mentioned is normally run at 500kHz. There's a reasonable chance that this is the power regulator spike not being damped out. Easy enough to check with a scope handy.
[1] https://fotronic.asset.akeneo.cloud/pdfs/media/owon_hds242s_...
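The back-of-the-envelope bound above can be sketched in a few lines (the 20ns/div and 12-division screen width are assumptions taken from the reading described, not confirmed specs for this scope):

```python
# Rough sanity check on the scope reading: if one full period of the big
# spikes doesn't fit on screen, the period is at least the full sweep width.
time_per_div = 20e-9   # seconds per grid division (assumed from "M:20ns")
divisions = 12         # assumed horizontal screen width in divisions

min_period = time_per_div * divisions   # 240 ns minimum period
max_frequency = 1 / min_period          # upper bound on spike frequency

print(f"{max_frequency / 1e6:.2f} MHz")  # → 4.17 MHz
```

So anything repeating slower than roughly one screen width is at most ~4 MHz, consistent with a ~500kHz regulator being a plausible culprit.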
The fifth grade in me salutes the fifth grade in you.
It seems to be using more info from pre-1900 rather than 1930. It doesn't know about the Great Depression (1929-WWII). It knows about WWI if you ask it specifically, but talks about European politics as if it's 1900 or so.
On technology, it knows who Edison is, at roughly the Wikipedia level, but credits him with a 125MPH car. About a dial telephone, it is confident and totally confused. It has the traction voltage for the London Underground right. But then it goes on with "Thus, if the current be strong enough to force its way through a resistance of 100 ohms, it is said to have a pressure of 100 volts; and, if it can overcome 1,000 ohms, its pressure is 1,000 volts." Which is totally wrong.
There's a general pattern. The first sentence or two has info you might get from Google. Then it riffs on that, drifting off into plausible nonsense.
Don't ask this thing questions to which you do not know the answer. You will pollute your brain.
This signifies that each vertical dotted line is 20ns apart, so the ripple you see has a frequency of something like 50MHz.
Unless you have a 50MHz buck converter (which would be very exotic --- the fastest common ones are around 1/10th that), that looks more like something may be inadvertently oscillating and/or you're picking up strong RF noise from possibly something in...
https://en.wikipedia.org/wiki/6-meter_band#Radio_control_hob...
And "leared" -- the (unintentional?) pun made me click.
What "walled garden"? The Mac-only apps aside, what's there that you couldn't get on Windows (and most of it even on Linux), either the same thing, or a zero-switch-cost subscription (it's not like you need to rebuy something to go from Music to Spotify, for example).
iCloud? You can use Google Drive or Dropbox or whatever MS calls theirs. Apple Music? Pretty sure it plays on both.
Most major apps are cross platform (Adobe, Microsoft and such), or Electron based.
Syncing with your iPhone? You can do that from Windows and Linux as well. Airpods? Work with Android and Windows too.
And so on.
> the idea that glyphosate is part of the technology stack of GM crops
Is this true? Can't we give in on glyphosate without losing GMOs?
> That's not healthy at all
I do two weeks dry a quarter (versus dry January, which became silly once I moved to a ski town). My otherwise non-existent sweet tooth revs into first gear in the second week.
"Pre-IPO instruments trading onchain, backed 1:1 by SPV exposure on Jupiter, are providing a real-time proxy for the company’s implied IPO valuation."
Correction: crypto investors are bidding Anthropic up to $1tn.
Institutional markets are closer to $800 to 900bn. So the crypto investors are directionally correct, relative to last valuation and OpenAI, but of course paying top tick.
100% agree that there are more and less valuable meetings. Agendas and todo checkins are signals of worthwhile meetings. And having a meeting end or change is a good sign too.
No unencumbered epub, no sale.
I don't know, ELRS/LoRa is pretty amazing. I don't know what kind of jamming happens there, but with a big (and tall) enough antenna, you should be able to get pretty far.
What makes "organic software" better than vegetarianism?
This is off topic, but is it legal for websites to ask me to either accept tracking or pay? I thought the GDPR made tracking truly optional.
Whoa, Alec Radford is on the list of authors! He was instrumental in building the original GPT models at OpenAI.
I remain hopeful that some day someone will train an LLM which is tolerable to people who take this stance (which I respect, much like I respect food vegetarians despite not being one myself).
I've been tracking models trained entirely on out-of-copyright data, for example. I've not yet seen one of those which appears generally useful and didn't chuck in a scrape of the web or get fine-tuned on examples generated by a non-vegetarian model.
Andrej Karpathy can train a GPT-2 class model for less than $80 now, so at least the environmental cost of training may drop to a point that it's acceptable to LLM vegetarians: https://twitter.com/karpathy/status/2017703360393318587
Why do I care? This post is a great example. If you're a professor of computer science I really want you to be able to tinker with this fascinating class of models without violating your principles.
UPDATE: Huh, speaking of potentially vegetarian models, I just saw https://talkie-lm.com/introducing-talkie on the HN homepage https://news.ycombinator.com/item?id=47927903
I've explored a different out-of-copyright-trained model, Mr Chatterbox, before but found it to have been mildly corrupted with the help of synthetic conversation pairs from Haiku and GPT-4o-mini - https://simonwillison.net/2026/Mar/30/mr-chatterbox/
Talkie isn't entirely pure either though: "Finally, we did another round of supervised fine-tuning, this time on rejection-sampled multi-turn synthetic chats between Claude Opus 4.6 and talkie, to smooth out persistent rough edges in its conversational abilities."
"I do not and will not use LLMs, in any form, for any purpose. Although LLMs are fascinating from a purely technical perspective, I refuse to participate in or contribute to such systems that are built on massive exploitation of human labor and make profligate use of scarce resources. I also don't think they are actually very good for a lot of the applications people seem excited about. Even in cases where LLMs are technically good at a task, that does not necessarily mean their use for that task contributes positively to human flourishing.
A good way to describe myself is as a generative AI vegetarian. You can find a fuller explanation—and many, many links—at the above essay by Sean Boots, which I agree with almost 100%."
Came here to say the same thing.
Like, I'd be interested to see where my boundaries between blue and cyan, or cyan and green, are compared to the rest of the population.
But there's a whole other color between blue and green! A color that is primary under the subtractive CMYK model.
And it's an even bigger difference than with orange, because while red and yellow are 60° apart on the color wheel so that orange is 30° from each, blue and green are a full 120° apart on the color wheel, with cyan being 60° from each. So it's actually even worse -- it's as bad/nonsensical as showing yellow and asking if yellow is red or green.
There is no support for DANE on the client side!
I never understood "forced classification" games like this (as an aside, it's also why I always hated Myers Briggs). Maybe it's because I'm somewhere on the spectrum, but it always seems like a dumb, false choice to me.
For example, when I saw the second color, "aqua" immediately popped into my mind. Aqua is literally defined as #00FFFF in RGB color space - no red, equal (max) parts blue and green. So it just felt like flipping a coin to me, as the color seemed neither more blue nor more green.
If you're not colorblind, yes. More or less.
It wouldn't make much sense for the evolutionary machinery to keep the whole backend the same but diverge in the perception part.
Wouldn't it be great if public officials would say what they in fact mean the first time?
Another markup language that looks like RUNOFF from the 1970s. I used to use RUNOFF for my term papers.
Do you have a good source on that retrofit?
NaNs are a very underappreciated feature of IEEE-754 floating point. In the D programming language, floats get default initialized to NaN, not to 0.0.
double y = 0.0; // initialized to 0.0
double x; // initialized to NaN
The discussion routinely comes up as "why not default initialize to 0.0?" The reason is that a routine mistake in programming is forgetting to initialize a variable. With a floating point 0.0, one may never realize that the floating point calculation results are wrong. But with NaN, the result of a floating point computation will be NaN, which is unlikely to go unnoticed. I don't know of any other programming language with this safety feature.
Also, the D `char` type is initialized to 0xFF, not 0, because Unicode says that 0xFF is an invalid character.
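A rough Python analogue of the D behaviour described above (Python has no default-to-NaN, so the NaN is seeded by hand here purely to show why the propagation makes the bug visible):

```python
import math

# Stand-in for D's `double x;` — an "uninitialized" value that is NaN.
uninitialized = float("nan")

# Any arithmetic involving NaN yields NaN, so the mistake surfaces loudly.
result = uninitialized * 3.0 + 7.0
print(math.isnan(result))   # → True: the missing initialization is visible

# With a 0.0 default, the same code yields a plausible-looking number
# and the bug can go unnoticed.
print(0.0 * 3.0 + 7.0)      # → 7.0
```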
Surely there are no other lifestyle, supply chain, or medical system differences between the UK and Germany! Open and shut!
I'd love to see some credible reporting on the graveyard of blockchain projects.
So many obviously stupid ideas cropped up on the blockchain in 2021-2022. How many of those are still going concerns?
I guess the problem with blockchain stuff is that often there's no servers to shut down or other clear indication that a project has failed - presumably you can look at on-chain data to see if people have stopped trading various backing tokens, but does trade ever clearly stop or are there always bots exchanging tokens back and forth?
'And how do you know this is the case?' : 'Receipts, mostly.'
There is an author comment that is invisible if you don't have showdead on in the thread below.
You'd think that Brin would know the difference between Soviet Russia and socialism. Money sure doesn't seem to make people smarter. Anyway, I really don't understand why he is trying to justify his choice in this awkward and clearly dumb way when there is a much more straightforward explanation that will do just fine.
It is because your cellphone is a proxy for you.
You can read that quote as "I fled totalitarianism with my family in 1979 and I know the massive benefits it brings for those of us at the top".
This quote from Matt Levine in 2023 feels relevant: https://www.bloomberg.com/opinion/articles/2023-11-20/who-co...
> And the investors wailed and gnashed their teeth but it’s true, that is what they agreed to, and they had no legal recourse. And OpenAI’s new CEO, and its nonprofit board, cut them a check for their capped return and said “bye” and went back to running OpenAI for the benefit of humanity. It turned out that a benign, carefully governed artificial superintelligence is really good for humanity, and OpenAI quickly solved all of humanity’s problems and ushered in an age of peace and abundance in which nobody wanted for anything or needed any Microsoft products. And capitalism came to an end.
> Who's else would they be?
Their employer? They may work at a related company and be required to say this.
Still is, nowadays the standard is Jakarta EE 11, alongside Microprofile, which Spring also uses parts of.
Like solder dendrites.[1] Same mechanism, different materials.
[1] https://pubs.rsc.org/en/content/articlehtml/2017/ra/c7ra0436...
I'm glad this article includes the only credible fix for the HTTP leak problems: CSP.
A useful thing I learned recently is that, while CSP headers are usually set using HTTP headers, you can also reliably set them directly in HTML - for example for HTML generated directly on a page where HTTP headers don't come into play:
<iframe sandbox="allow-scripts" srcdoc="
<meta http-equiv=&quot;Content-Security-Policy&quot;
content=&quot;default-src 'none'; script-src 'unsafe-inline'; style-src 'unsafe-inline'&quot;>
<!-- untrusted content here -->
"></iframe>
It feels like this shouldn't work, because JavaScript in the untrusted content could use the DOM to delete or alter that meta tag... but it turns out all modern browsers specifically lock that down, treating those CSP rules as permanent as soon as that meta tag has loaded, before any malicious code has the chance to subvert them. I had Claude Code run some experiments to help demonstrate this a few weeks ago: https://github.com/simonw/research/tree/main/test-csp-iframe...
> there's a general principle rooted in Roman law
There goes my fucking morning :P
Windsurf made a similar change in March: https://docs.windsurf.com/windsurf/accounts/quota
> In March 2026, Windsurf replaced the credit-based system with a quota-based usage system. Instead of buying and spending credits, your plan now includes a daily and weekly usage allowance that refreshes automatically.
With hindsight, per-request pricing makes no sense at all if an agent can burn a widely varying amount of tokens satisfying that request. These pricing plans were designed before coding agents changed the dynamics of token usage.
You're presupposing there's a valid argument for the other side. The text of the fourth amendment clearly connects the scope of privacy to property rights:
"The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."
Cell location data belongs to AT&T and Verizon, not the accused individual. As to such third-party data, there's a general principle rooted in Roman law that third parties can be compelled to provide documents in their possession to aid a court proceeding: https://commerciallore.com/2015/06/04/a-brief-history-of-sub... ("In an early incarnation of mandatory minimum sentencing there were only two offences that automatically attracted the death penalty, treason and failing to answer a subpoena. Subpoenas as a tool of justice were considered so important that failing to answer it was a most egregious violation of civic duty. A person accused of murder may or may not be guilty, but if a person refused to answer a subpoena then they were seen as denying Jupiter’s justice itself.").
Those principles were incorporated into what's called the third-party doctrine half a century ago: https://en.wikipedia.org/wiki/Third-party_doctrine. But by then it was already an ancient principle.
macOS has a Network location in the sidebar that will show other SMB devices discovered on the network.
SCOTUS has been this way for a while now.
They start with the desired decision and work backwards to justify it.
What's annoying is that it's obvious. In the case of GPT 5.5, if Copilot is going to charge 7.5x what GPT 5.4 costs while OpenAI themselves via the API/Codex only charges 2x of what GPT 5.4 costs, that will immediately raise an eyebrow.
> the focus has shifted to faster approvals with evidence for new methods and drugs coming down to one high quality trial, and removed stipulations for randomized control trials for ultra rare diseases
Could you share how RFK's policies helped bring this to market faster? (Not challenging you, by the way. Just need help connecting the dots.)
"...if you have an Apple silicon Mac and AFP support is dropped from macOS 27, that would leave you unable to upgrade without replacing your network storage."
How big is this market? I'm not saying vibe code a product, but...
Why would I do that when making things is so much fun?
That's a 16" (from the size of the speaker grille on each side of the keyboard), so even more claustrophobic.
> the truism that no railroad company became an airline
I don't know if that's a truism. A railroad company could have tried merging with an airline. That they didn't doesn't make it obvious that it wouldn't have worked. (Air-rail alliances started becoming a thing twenty years ago [1].)
> "We" in this sentence refers to both parties
Fair enough.
> "they" refers to OpenAI. Not a grammatical error
I'd say it is. It's a press release from OpenAI. The rest of the release uses the third-person "they" to refer to Microsoft. The LLM traded accuracy for a bad joke, which is something I associate with LinkedIn speak.
The fundamental problem might be that the OpenAI press release is vague. (And changing. It's changed at least once since I first commented.)
To some in the Deaf community, being Deaf is like skin color or hair color or height or left handedness; a normal variation of humanity with its own culture. "Fixing" reads as genocide to them, and it's not entirely unwarranted.
> Very convenient to leave out Amazon in your back of the envelope test, who’s internal metrics were showing a path toward quasi-monopoly profits
Not in the 1990s. The American e-commerce industry was structurally unprofitable prior to the dot-com crash, an event Amazon (and eBay) responded to by fundamentally changing their businesses. Amazon bet on fulfillment. eBay bet on payments. Both represented a vertical integration that illustrates the point–the original model didn't work.
> There’s a difference between being too early vs being nonsense
When answering the question "do the investments make sense," not really. You're losing your money either way.
The American AI industry appears to have "viable economics for profit" without AGI. That doesn't guarantee anyone will earn them. But it's not a meaningless conclusion. (Though I'd personally frame it as a hypothesis I'm leaning towards.)
> Fading connections. If two friends go a full year without tapping phones, the link between them softens. Not a punishment — a gentle nudge that real friendships are kept alive in person, not online.
One of my very best friends lives in another country. We speak nearly every day, but I haven't seen them in person in over a year.
Another of my friends lives on the other side of the USA. We speak a few times a week, but I haven't seen them in person in about four years now. And that was only because their mom lived nearby. Their mom moved, so it's unlikely we'll see each other except once a decade when we do our friends trip to Vegas.
I have other very close friends who I almost never see in person.
My point being, having to tap phones is cool and all but not a great measure of the strength of friendship.
> This looks like it uses Gemini Nano under the hood.
Yes; "With the Prompt API, you can send natural language requests to Gemini Nano in the browser."
A lot of people are referencing meditation. Ultimately that's not a terribly well-defined word. It may match some broad ones, but there's a lot of narrow ones that it wouldn't.
If staring at a wall helps then don't let me stop you but I've sometimes done something very similar by just sitting in a chair without any cell phone, book, electronic item, etc. until I'm very bored. Not like "gritting my teeth, come on we can do another 15 minutes let's goooooo" like an exercise push, but definitely waiting past the first couple of twitches of boredom until it's a constant. It's kind of an interesting way to start a vacation, really helps disconnect from work very quickly. It can be some hours, though.
I do find that this only happens for me if I'm "doing nothing". I see others suggesting exercise, or something else, and those are absolutely good in their own way. But they are not the same thing as just doing nothing. It's still trying to do something and "use the time productively".
The downside is that the family just sees a guy sitting there "doing nothing" and can find a dozen reasons to interrupt... it's hard to do this when there are any other people around, and while I'm not an absolutist about a plan that can be summed up as "sit until you can't" without much loss, the interruptions do very quickly diminish the utility. There's a huge difference between sitting uninterrupted for an hour, and sitting for 15 minutes, putting away the dishes, sitting for 15 minutes, getting up to help reach something, sitting for 15 minutes, explaining that yes you really are sitting there just doing nothing would you please just let me do that, and sitting for 15 minutes.
This particular thing doesn't match "meditation" to me, because I'm not even doing the minimal thing meditation involves; I'm not concentrating on breathing, not trying to "not think", not trying to do anything. If the mind races, let it race until it is done racing[1]. On this point in particular this certainly doesn't match a lot of specific meditation traditions. If the thought of doing something occurs to you, that meditation technique of letting it pass through you until it disappears can be useful.
If meditation is a deliberate attempt to slow down, or a deliberate attempt to concentrate on some particular thing, or a deliberate attempt to empty one's mind, it still has a deliberative goal. If you're willing to broaden the term to encompass not even having that much of a plan, then I have no objection. But this feels to me too low level to even justify the term meditation as most people use it. If you're "trying" to do anything at all, then this isn't really what I'm talking about here. I'm not saying this is "better" than meditation, I'm more saying I'm not sure this even rises to that level, as low as some of them may be. It's really just "rest", a concept our century and culture has largely lost track of.
(Of course the obvious semantic argument about "well are you trying to not try, hmmmmmm?" is there and you are free to debate that in your own head, because like I said, I'm not trying to be absolutist about this. This isn't a program I'm proposing so much as an experience report. You do whatever and call it whatever and argue about definitions as much as you like.)
[1]: If your mind literally never stops this may not work for you... that said, in the 21st century, are you sure your mind never stops racing if you just let it run itself to exhaustion? Have you ever tried? It could be some hours, plural. Again, I fully acknowledge that some people reading this can say "yes". I acknowledge the existence of great neurodiversity. But if you've never tried just letting it run itself to exhaustion you may be surprised what happens if you can find the time to let it.
It's by far the strongest sign of AI writing. I counted 14 instances of the word "not" in this piece, and now I'm wondering if ratio-of-nots-to-length might be a really cheap way to spot text that came out of an LLM.
I think the reason it grates so much is that it's such a cliché. Given their nature, it's not surprising that LLMs would turn to clichés so much.
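A minimal sketch of the "ratio of nots" heuristic floated above (the `not_ratio` helper, its token rules, and any threshold you'd pick are my own illustration, not a calibrated detector):

```python
import re

def not_ratio(text: str) -> float:
    """Fraction of word tokens that are negations ("not" or n't-contractions)."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    nots = sum(1 for w in words if w == "not" or w.endswith("n't"))
    return nots / len(words)

sample = "It's not magic. It's not hype. It's just engineering."
print(f"{not_ratio(sample):.3f}")  # → 0.222 (2 negations out of 9 tokens)
```

Whether any cutoff actually separates LLM prose from human prose would need testing against real corpora; this only makes the ratio cheap to compute.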
> It is so easy to sit on and critique from the sidelines. Steve Jobs had a passion for product, and it showed - he pushed the teams to make things he approved of, and that was the measure. Tim Cook had a passion for growth, and as the article states, Apple's income now rival some GDPs.
Who cares that it's Tim Cook's "passion" unless you're an Apple investor?
At this point, AGI is either here, or perpetually two years away, depending on your definition.
This is the way forward, convenient automatic resource management, with improved type systems for low level coding.
In various forms: affine types, linear types, dependent types, effects, formal proofs.
The list is already rather long: D, Chapel, Swift, Linear Haskell, Ox, OCaml, Koka, Ada/SPARK, Mojo,...
You can put an OLED TV over a hole in the wall and it's cheaper than getting someone to fix the drywall.
When we pay premium, we expect premium.
Let me guess, anyone who thinks Apple products are a buggy mess is a geek, and everyone else is ordinary, No True Scotsman style?
Personally, I don't think the fact that the Apple keyboard is unusable is a "geek" thing.
Because many aren't software engineers, they are bricklayers.
To be comparable, they would have to go through the same university degree and professional certification, instead of doing a JavaScript course and calling themselves software engineers instead of coders.
They are getting the blueprints from architects and senior devs, and putting those bricks into place, and carrying buckets.
10 kg spools are murder on your extruder gears; I would not recommend going above 5.
You could have seen this coming a mile away. So far I have gotten away with never uploading my ID and/or interacting with one of those companies (though one idiot working for some VC thought it was ok to sign a document on my behalf by uploading my signature!!, never mind a bit of fraud) but it is getting harder and harder. Banks and in some cases even governments forcing you to send data to these operators is a very bad idea. But hey, who ever got hurt by some security theater?
I had to open a bank account for a company here a few years ago, right on the bubble of this happening, and they still had an option to come by in person with the proper documentation, which I did. Now it is all outsourced.
These companies are the fattest targets and they're run by incompetents. You should assume that anything you give them will eventually be part of some hack.
Edson Brandi is a very clever guy, and I'm surprised he managed to pull this off considering his other professional engagements.
If we could remove everything that's not CO2 (all the extra stuff that comes with diesel oil), the CO2 could be piped to large plastic greenhouses of the same kind used in Northern Europe for all-year-round crops. And, if we can't clean the output well enough to grow food, at the bare minimum we can grow stuff for the cellulose for building materials.
There isn't a manual deprecation step, because the agent has no context outside what the human gives it. Deprecation happens when conflicting information is given ("you want to do this but this note says you tried it before and it failed, what do you want to do?").
At that point, either the human decides to go for it and the new decision is noted, and the old decision is superseded/removed, or the human says "wow I'm sure glad I'm using gnosis" and everything is left as-is.
My slice of the world was certainly relevant to me.
You have the freedom to choose to reply or ignore me.
Plenty of comments of "So sad I have been using this".
How many actually contributed back to keep it going?
What I see is a phone motherboard packaged inside an expensive keyboard and screen, with Apple glitter.
Anything to avoid taking responsibility...
That's backwards: it is helpful to keep that in mind precisely when you're suffering in the trenches.
Rich and successful people try to forget that, which is their hubris.
Thanks, I've been looking for that. Interesting how nowadays it's orders of magnitude cheaper to buy a 4k 65" panel and fake the dots (and sound) on it.
>The only word doing any work at all in that definition is "artifacts"
That's the subject, the only word that is NOT doing any work there (since both regular and software engineering produce artifacts).
Words that do the heavy work in that phrase are:
structured, mature, legally enforced, standards-based approach - for repeatable, reliable, verifiable, - artifacts - under stable external constraints
Software can sometimes appear to touch those.
E.g. there are "standards", like HTML or like ARIA, so it's "standards-based" too! But those standards are loosely enforced, usually not mandated, loosely defined, and implemented ad hoc with all kinds of divergence.
Or e.g. software can sometimes be repeatable. E.g. reproducible builds (to touch upon one aspect). But that's again left to the implementor, and seldom followed (almost never for most software work, only in niche industries).
In general, software is not engineering (in the strict sense) because it's anything goes, all the above conditions can or cannot be handled (in any random set), the final work is a moving target, and verification is fuzzy, if it even happens.
>The reality is that engineering is the methodical application of constraints to solve a problem.
In that case, following specific constraints to solve a math problem, or to draw an artwork (e.g. using perspective), is also "engineering". That's too loose a term to be of any use.
Even accepting that, the degree of "methodical" in software "engineering" versus e.g. civil or aviation engineering is orders of magnitude less.
Humans are not supposed to know that yet. You'll get in trouble with management if you continue doing this.
Ironic given that real railways invented the access control "token" for safety purposes in the middle of the nineteenth century: https://en.wikipedia.org/wiki/Token_(railway_signalling)
I think they're sufficiently opposed to high-volume industrial processes as a concept that they would select techniques that cannot be scaled in that way. Part of the art, I suppose.
Edit: ages ago, I thought of but never finished writing down an idea I had for an "anti-masterwork" for electronics. A traditional "masterwork" demonstrates knowledge of the craft by using standard techniques extremely well. So an "anti-masterwork" would demonstrate knowledge by using nonstandard techniques, or deliberately violating best practices, within the constraint of still having to actually work. A bit of a joke or troll.
One of the subideas was "design against manufacturing". Nonreproducible techniques that have to be done by hand. I considered glass and wood but this ceramic would have fit right in.
With a bit more aesthetic consideration you could even make electronic jewelery using ceramic and glass.
Nobody's measuring cancer rates in wild animals.
Due to our long lifespan, humans are relatively vulnerable to radiation, radioactive materials, and other bioaccumulative poisons. A fish might not accumulate enough mercury to kill itself over its lifetime, but when you eat one every day it all adds up.
This was why the disaster was so bad for so many farmers across Europe: https://www.bbc.com/news/uk-wales-36112372 ; the caesium is not enough to kill a sheep, which has a life of one or two years before slaughter, but should not be consumed by humans.
Oh, it's much more stupid than that: Ofcom can't block websites; I just checked and it's available on my phone right now. They've issued a fine to 4chan instead. Which they are ignoring.
Imgur have gone the other direction: they have voluntarily blocked the UK (!), which is very irritating when trying to browse Reddit.
There's certainly a process, but not a good one.
(separate from all this, the Internet Watch Foundation maintains a blocklist which ISPs voluntarily follow, of actual CSAM.)
More like "Dear friend, you have built an Application Container", even more so now with the VCs looking for money in powering WebAssembly containers as pods.
Quite common back when books were the main learning source.
This is not surprising, given how many of us were buying Netbooks, even with their OEM specific Linux distros, until Microsoft came up with the Windows XP discount.
My ASUS 1215B survived from 2009 up to 2024, through multiple Ubuntu LTS updates, with the HDD replaced by an SSD and the RAM eventually maxed out at 8 GB.
If Dell, Asus, Lenovo et al. started selling in regular computer stores what is only available to computer nerds on their online stores, this would be much more noticeable.
As it is, normies walk into a store and get to choose between Windows, macOS, Chromebooks, iPad Pro or Android DeX/HyperOS Workstation/....
All cloud vendors are buggy, and if you aren't paying enough there are only bots to talk to.
You can simply look at the GitHub repo where most of the commits say $Name and Claude
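If you want to quantify that claim for a given project, a rough one-liner (the trailer text is an assumption about how such commits are usually tagged; run it inside a clone of the repo):

```shell
# Count commits whose body carries a Claude co-author trailer.
# %b prints each commit's body; grep -ci counts matching lines,
# case-insensitively.
git log --format='%b' | grep -ci 'co-authored-by: claude'
```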
Tesla was quickly going bankrupt when Musk decided to buy into it. It was not an "established business". It had no plan, no car, no design, no sales. When you sell a controlling share to another person, the other person then controls the business.
Thinkpads were definitely never cheap.
I strongly disagree. Literally every item in there makes perfect sense if you have an understanding of how evolution works.
E.g. many of these items are simply vestigial in some sense, where their presence doesn't actively harm the species and it doesn't impose any substantial energy budget. E.g. the current top comment here is about male nipples. Male nipples may be "useless", but they're not actively harmful (and they can certainly be pleasurable during sex), so there is no evolutionary pressure to get rid of them. The perineal raphe (i.e. the "male taint stitch") also has no purpose but is simply a byproduct of how the male forms in utero.
As the article points out, most of the other "quirks" are simply what evolution had to deal with. You may say the eye is "weird" because the photoreceptors lie behind the ganglion cells, but it certainly works quite well, generally. And it doesn't "have to" be this way. Octopus eyes are completely the opposite and a great example of convergent evolution.
Other examples are simply tradeoffs. There is pretty obvious survival advantage for humans having a large brain, but this then adds complexity for how the head gets out of the relatively small pelvic canal.
I honestly didn't see any examples in this list that aren't well understood and well explained by scientists. If anything most of these provide excellent examples of how evolution works.
Actually, I like quite a lot of the subtle jokes on HN. They are harder to notice, fewer to find, and many a time I don't get them. But when I do (or someone explains one to me, perhaps out of pity), I chuckle, laugh, and laugh again. And I remember those comments.
I'm going to recommend one specific video: https://xoxofest.com/2024/videos/cabel-sasser/ - Cabel Sasser in 2024. I promise you it is worth your time.
"Empathy" isn't a binary in this context. You can exercise empathy and aid your community by making sure everyone you know votes red. That's the kind of cooperation that humans have evolved with. What you're talking about is undifferentiated, universal empathy, where someone would be willing to risk the lives of those close to them for a greater chance to help those who are outside their immediate reach to persuade.
I suspect if you played this game, lots of tight-knit, high-cooperation groups would undertake coordinated campaigns to ensure the survival of their members by ensuring everyone votes red.
> despite having a significantly higher barrier to entry in engineering difficulty and technical knowledge
RF engineering, in particular, is punishing. The subject is viciously hard (you think shared mutable state is hard? Ha!) and, as people have pointed out, most companies treat hardware engineering as a cost sink rather than a revenue driver, something to be avoided if possible. The only places where it's not are where companies vertically integrate instead of relying on external suppliers.