What are the most upvoted users of Hacker News commenting on? Powered by the /leaders top 50 and updated every thirty minutes. Made by @jamespotterdev.
There are lots of little hand-crank washing machines on Alibaba and Amazon. Most are plastic and rather fragile looking. Many seem to use the mechanism of salad spinners. The Sears WonderWash seems to be popular.
Zeroing memory is trickier than that, if you want to do it in Rust you should use https://crates.io/crates/zeroize
I backed this project: https://www.crowdsupply.com/modos-tech/modos-paper-monitor on Crowd Supply to see how close they can come to a "monitor" experience with an e-paper display.
I don't know if it even needs to be intentional. On mobile, it's incredibly easy to downvote when you mean to upvote.
Another commenter asked how ventilation is supposed to work -- it does say "air ventilation vents" [1], though it's extremely unclear from photos where those are or how they work, and how it's compatible with not drowning when you get dumped into the sea and they're on the bottom.
But I'm also wondering about where fresh water is coming from and where waste products go. It talks about a water storage bladder/tank, but surely that's intended for weeks max, not a year?
Kudos to those who performed recovery and snatched back from the sands of time.
While I think that's a bit harsh :-) the sentiment of "if you have these problems, perhaps you don't understand systems architecture" is kind of spot on. I have heard people scoff at a bunch of "dead legacy code" in the Windows APIs (as an example) without understanding the challenge of moving millions of machines, each at different places in the evolution timeline, through to the next step in the timeline.
To use an example from the article, there was this statement: "The split to separate repos allowed us to isolate the destination test suites easily. This isolation allowed the development team to move quickly when maintaining destinations."
This is architecture bleed-through. The format produced by Twilio "should" be the canonical form, which is submitted to the adapter that mangles it into the "destination" form. Great, that transformation is expressible semantically in a language that takes the canonical form and spits out the special form. Changes to the transformation expression should not "bleed through" to other destinations, and changes to the canonical form should be backwards compatible, to prevent changes in the source from impacting the destination. At all times, if something worked before, it should continue to work without touching it, because the architecture boundaries are robust.
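A minimal sketch of the boundary being described (all names here are hypothetical illustrations, not Twilio's actual code):

```rust
// Hypothetical sketch: one canonical form, with each destination's
// quirks confined to its own adapter behind a shared trait.
pub struct CanonicalEvent {
    pub user: String,
    pub action: String,
}

// The only shared contract: canonical form in, destination form out.
pub trait DestinationAdapter {
    fn transform(&self, event: &CanonicalEvent) -> String;
}

pub struct WebhookDestination; // hypothetical destination

impl DestinationAdapter for WebhookDestination {
    // Changing this transformation cannot "bleed through" to other
    // destinations, because they only ever see the canonical form.
    fn transform(&self, event: &CanonicalEvent) -> String {
        format!(r#"{{"user":"{}","event":"{}"}}"#, event.user, event.action)
    }
}

fn main() {
    let e = CanonicalEvent { user: "alice".into(), action: "signup".into() };
    println!("{}", WebhookDestination.transform(&e));
}
```

The point of the trait boundary is that a new destination adds an `impl`, never a change to `CanonicalEvent` or to any existing adapter.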
Being able to work with a team that understood this was common "in the old days" when people were working on an operating system. The operating system would evolve (new features, new devices, new capabilities) but because there was a moat between the OS and applications, people understood that they had to architect things so that the OS changes would not cause applications that currently worked to stop working.
I don't judge Twilio for not doing robust architecture; I was astonished, when I went to work at Google, at how lazy everyone got when the entire system is under their control (as in, there are no third party apps running in the fleet). There was a persistent theme of some bright person "deciding" to completely change some interface and Wham! every other group at Google had to stop what they were doing and move their code to the new thing. There was a particularly painful 'mandate' to move to a new version of their RPC system while I was there. As Twilio notes, that can make things untenable.
> a perfectly ambiguous mix of truth and FUD
Congrats on Fil-C reaching heisentroll levels!
"Dixie can't meaningfully grow as a person. All that he ever will be is burned onto that cart;"
"Do me a favor, boy. This scam of yours, when it's over, you erase this god-damned thing."
Things like this and other custom "Windows distros" are a sign that MS would have no problem selling a version of Windows that's nothing more than a base OS, but clearly they would rather take the user-hostile route.
If you are really concerned you should do this and then report back. Otherwise it is just a mild form of concern trolling.
"Heck, why isn't there a cornucopia of new apps, even trivial ones?"
There is. We had to basically create a new category for them on /r/golang because there was a quite distinct step change near the beginning of this year where suddenly over half the posts to the subreddit were "I asked my AI to put something together, here's a repo with 4 commits, 3000 lines of code, and an AI-generated README.md. It compiles and I may have even used it once or twice." It toned down a bit but it's still half-a-dozen posts a day like that on average.
Some of them are at least useful in principle. Some of them are the same sorts of things you'd see twice a month, only now we can see them twice a week if not twice a day. The problem wasn't necessarily the utility or the lack thereof, it was simply the flood of them. It completely disturbed the balance of the subreddit.
To the extent that you haven't heard about these, I'd observe that the world already had more apps than you could possibly have ever heard about and the bottleneck was already marketing rather than production. AIs have presumably not successfully done much about helping people market their creations.
If the government is using the same fake data as the rest of the Internet you want to be using that fake data too. You want to be precise, not accurate. If the FBI records your endpoint as Iran and you say "I wasn't actually sending traffic from Iran, where there are sanctions, I was sending from London but my VPN provider lied on their WHOIS record", you will be in just as much trouble as if you were actually sending data from Iran.
Wait, what? I've been wondering why people have been fussing over supply chain vulnerabilities, but I thought they mostly meant "we don't want to get unlucky and upgrade, merge the PR, test, and build the container in the window after a malicious version is published but before anyone catches it".
Who doesn't use lockfiles? Aren't they the default everywhere now? I thought npm used them by default.
I think you did a great job of bringing fairly nuanced problems into perspective for a lot of people who take their interactions with their phone/computer/tablet for granted. That is a great skill!
I think a fertile area for investigation would also be 'task specific' interactions. In XDE[1], the thing that got Steve Jobs all excited, the interaction models are different if you're writing code, debugging code, or running an application. There are key things that always work the same way (cut/paste for example) but other things that change based on context.
And echoing some of the sentiment I've read here as well, consistency is a bigger win for the end user than form. By that I mean even a crappy UX is okay if it is consistent in how it's crappy. I heard a great talk about Nintendo's design of the 'Mario world' games and how the secret sauce was that Mario physics are consistent, so as a player, if you know how to use the game mechanics to do one thing, you can guess how to use them to do another thing you've not yet done. Similarly with UX: if the mechanics are consistent, they give you a stepping-off point for doing a new thing you haven't done, using mechanics you are already familiar with.
[1] Xerox Development Environment -- This was the environment everyone at Xerox Business Systems used when working on the Xerox Star desktop publishing workstation.
Find problems to solve with code, and write code to solve those problems. You’re building muscle strength in the ability to rapidly pattern match to potential reference code paths.
CLAUDE.md is read on session startup.
If you're continually finding that it's being forgotten, maybe you're not starting fresh sessions often enough.
Indeed, the Salt River Project nuclear generator uses reclaimed sewage water for cooling.
Plenty of people were pointing out that voting machines had poor security for about two decades. Even before that, there was the mechanically disastrous Bush vs Gore Florida ballot.
America being what it is, with endless Voting Rights Act lawsuits required to keep the southern states running vaguely fair elections, it was impossible to get a bipartisan consensus that elections should actually be fair. And so the system deteriorates.
Is there any real-life situation in which this matters, though?
If you're picking a country so you can access a Netflix show that geolimits to that country, but Netflix is also using this same faulty list... then you still get to watch your show.
If you're picking a country for latency reasons, you're still getting a real location "close enough". Plus latency is affected by tons of things such as VPN server saturation, so exact geography isn't always what matters most anyways.
And if your main interest is privacy from your ISP or local WiFi network, then any location will do.
I'm trying to think if there's ever a legal reason why e.g. a political dissident would need to control the precise country their traffic exited from, but I'm struggling. If you need to make sure a particular government can't de-anonymize your traffic, it seems like the legal domicile of the VPN provider is what matters most, and whether the government you're worried about has subpoena power over them. Not where the exit node is.
Am I missing anything?
I mean, obviously truth in advertising is important. I'm just wondering if there's any actual harm here, or if this is ultimately nothing more than a curiosity.
The irony is that many dynamic languages have had optional typing for decades.
BASIC, Common Lisp, xBase/Clipper, Perl
So yet again we got a cycle of people rediscovering the tools that predated them.
> In the Trek universe, LCARS wasn't getting continuous UI updates
In the Trek universe, LCARS was continuously generating UI updates for each user, because AI coding had reached the point that it no longer needs specific direction, and it responds autonomously to needs the system itself identifies.
I use Junie to get tasks done all the time. For instance I had two navigation bars in an application which had different styling, and I told it to make the second one look like the first and... it made a really nice patch. Also if I don't understand how to use some open source dependency I check the project out and ask Junie questions about it like "How do I do X?" or "How does setting prop Y have the effect of Z?" and frequently I get the right answer right away. Sometimes I describe a bug in my code and ask if it can figure it out and often it does; I ask for a fix and often get great results.
I have a React application where the testing situation is FUBAR: we are stuck on an old version of React where test setups like Enzyme that really run React are unworkable because the test framework can never know that React is done rendering -- working with Junie I developed a style of true unit tests for class components (still got 'em) that tests tricky methods in isolation. I have a test file which is well documented, explaining the situation around tests, and I ask "Can we make some tests for A like the tests in B.test.js, how would you do that?" and if I like the plan I say "make it so!" and it does... frankly I would not be writing tests if I didn't have that help. It would also be possible to mock useState() and company and I might do that someday... It doesn't bother me so much that the tests are too tightly coupled, because I can tell Junie to fix or replace the tests if I run into trouble.
For me the key things are: (1) understanding from a project management perspective how to cut out little tasks and questions, (2) understanding enough coding to know if it is on the right track (my non-technical boss has tried vibe coding and gets nowhere), (3) accepting that it works sometimes and sometimes it doesn't, and (4) recognizing context poisoning -- sometimes you ask it to do something and it gets it 95% right and you can tell it to fix the last bit and it is golden, other times it argues or goes in circles or introduces bugs faster than it fixes them and as quickly as you can you recognize that is going on and start a new session and mix up your approach.
There's a counterintuitive pricing aspect to Opus-sized LLMs: they're so much smarter that, in some cases, they solve the problem faster and with far fewer tokens, so they can end up being cheaper.
> Are you saying “alas for citizens of the US who see things in competitive nationalist terms”?
He’s saying it as a realist.
China is building the equivalent to America’s sanctions power in their battery dominance. In an electrified economy, shutting off battery and rare earths access isn’t as acutely calamitous as an oil embargo, but it’s similarly shocking as sanctions and tariffs.
This is a quote from the concluding paragraphs of https://obie.medium.com/what-happens-when-the-coding-becomes...
A broken clock is right twice a day.
Your mom was especially courageous to do this as a child!
> it's a freaking mess to deal with recycling and often times, garbage we don't know what to do
I love that this is followed by “so go nuclear!”
no paywall: https://archive.ph/jvNK7
> I'm wondering if altruism is in decline
Altruism and empathy, by name, are targets of derogation by a major political movement in the US, at least. So, yeah, absolutely.
> I sometimes even get the feeling that altruism is seen as a weakness these days.
This is fairly explicitly the case, yes.
Respectfully, this has become a message board canard. Go is absolutely a memory safe language. The problem is that "memory safe", in its most common usage, is a term of art, meaning "resilient against memory corruption exploits stemming from bounds checking, pointer provenance, uninitialized variables, type confusion and memory lifecycle issues". To say that Go isn't memory safe under that definition is a "big if true" claim, as it implies that many other mainstream languages commonly regarded as memory safe aren't.
Since "safety" is an encompassing term, it's easy to find more rigorous definitions of the term that Go would flunk; for instance, it relies on explicit synchronization for shared memory variables. People aren't wrong for calling out that other languages have stronger correctness stories, especially regarding concurrency. But they are wrong for extending those claims to "Go isn't memory safe".
I'm pretty sure Burroughs' venerable OS, so user hostile that it inspired a brilliant movie villain, is not a fad.
Oh... You mean that other MCP... Oh well...
That's absolutely amazing.
On one of my many trips to Europe, I was wandering around the downtown area, and having walked a great deal sat down on a park bench to rest.
Two very beautiful young ladies came up to me, and said you look like you need a hug. Instantly my spidey sense went on red alert, as I figured these two were pickpockets or scammers or ladies of the evening, since I was much too old to be of interest to them, and no woman has ever remarked that I was handsome. I asked them what they were doing, and they said they were just doing a project spreading kindness.
So I said ok and one of them gave me a truly wonderful hug, and I said thank you and they went on their way.
All I can say is "wow".
Somebody did this back in the DOS era. The program was sometimes called "the crooked accountant's spreadsheet", because you could start with the outputs you wanted and get the input numbers adjusted to fit.
Anyone remember?
> will cause vastly more moderation and the disappearance of many or most comment sections
We really don’t know this.
High voltage transmission lines are really quite efficient, and concentrating generation is usually the right choice.
That said, it doesn't make sense to have just a single place for the entire country, as there are multiple grids in the US (primarily East, West, and Texas), and with very long transmission you can get into phase issues.
The CLAUDE.md is like the documentation you hand to a new engineer on your team that explains details about your code that they wouldn't otherwise know. It's not bad to need one.
I'd be interested in seeing some of the neuroscience research, because the narrative spun by this post - that the primary reason for a change to the "zero fucks to give" attitude is hormonal and biological - seems weak to me.
I'm also someone (a man FWIW, as the article was mainly focused on the experience of women) who reached an abrupt mental shift in my late forties. And sure, there could be some underlying biological shift I'm not conscious of, but a lot of it is simply that "pretending" at this stage no longer serves a useful purpose, and most people become aware of it at this stage of life.
I love the saying "over the hill" because it gives me a good visual of what's going on. When you're young, and looking up "from the bottom of the hill", you can fantasize about all sorts of possibilities and outcomes that can happen to you. As you age, though, more and more avenues get cut off - you're not going to be the sports, movie or rock star you dreamed of, you're not going to invent a cure for cancer, you're not going to become a billionaire, etc. When you're "over the hill", you can see pretty much into the valley below, and you have to be realistic about the possibilities. I think a lot of people may switch their "people pleasing" ways because they stop fantasizing about the benefits that may happen by "keeping all doors" open. You see you no longer have infinite time left, and you decide where to spend it more wisely. It's like the famous Confucius quote "Every man has two lives, and the second begins when he realizes he has only one."
One reason I didn't like this essay is that it seems to be trying, ironically, to explain this change in perspective/behavior, and the negative response that can come from it, especially for women, as a "biological/hormonal consequence". The whole point of having "no fucks left to give" is that you don't care how others respond to your less pleasing attitude. If you're still trying to explain it so you can understand (or try to ignore) others' negative responses, I feel like you've missed the point.
Solar and storage is the cheapest form of power now. Prices for both will continue to decline.
Battery storage hits $65/MWh – a tipping point for solar - https://news.ycombinator.com/item?id=46251705 - December 2025
Plenty of huge businesses keep all their critical data in the cloud. If they were banned from Microsoft 365 they would instantly go out of business.
It's pretty much the same thing as in every previous age: not having a community of experience, and the supporting materials such a community produces, has always been a disadvantage for early adopters of a new language. So the people who used it first were people with a particular need it seemed to address that offset the disadvantage for them, or who had a particular interest in being in the vanguard.
And those people are the people that develop the body of material that later people (and now LLMs) learn from.
I paid $5,000 in 2007 for the best TV you could buy at the time: Pioneer Kuro Elite 50” 1080p plasma. I’m still using it as my only TV. For the past 5 years I’ve been looking to upgrade/replace it with a state-of-the-art top-of-the-line 4k OLED/micro-OLED/quantum dot/etc. — but when I go to look at current screens, none match the almost 3D depth and beauty of my plasma display. So, I’m patiently waiting for my 18-year-old TV to stop working — but much to my amazement it’s never ever needed service! Edit: Smart TVs appeared in 2007-8; mine did not offer this “feature.”
Every generation runs LLMs faster than the previous one.
I just shipped a new one of these a few minutes ago (from my phone).
I found out about a new Python HTML parsing library - https://github.com/EmilStenstrom/justhtml - and wanted to try it out but I'm out without my laptop. So I had Claude Code for web build me a playground interface for trying it out: https://tools.simonwillison.net/justhtml
It loads the Python library using Pyodide and lets you try it out with a simple HTML UI.
The prompts I used are in this PR: https://github.com/simonw/tools/pull/156
> Could X just basically stop moderating all together?
An algorithmic feed is one of the things that would make them a publisher without Section 230. So, they could, but they wouldn't be anything like X anymore.
> Basically is the law consistent enough to adopt a hands off strategy to maintain liability protection?
No, that's why Section 230 was adopted: to address an existential legal threat to any site of non-trivial scale with user generated content. Without Section 230, or a radical revision of lots of other law, the only practical option is for providers to do as much review and editing of, and accept the same liability for, UGC as they would for first-party content.
If you wanted to tighten things up without intentionally nuking UGC as a viable thing for internet businesses practically subject to US jurisdiction, you could revise 230 to explicitly not remove distributor liability (it doesn't actually say it does and the extension to do this by the courts was arguably erroneous), which would give sites an obligation to respond to actual knowledge of unlawful content but not presume it from the act of presenting the content. But the “repeal 230” group isn't trying to solve problems.
This is about minimizing attack surface. Not only could secrets be leaked by hacking the OS process somehow to perform arbitrary reads on the memory space and send keys somewhere, they could also be leaked with root access to the machine running the process, root access to the virtualization layer, via other things like rowhammering potentially from an untrusted process in an entirely different virtual context running on the same machine, and at the really high end, attacks where government agents seizing your machine physically freeze your RAM (that is, reduce the physical temperature of your RAM to very low temperatures) and read it out later. (I don't know if that is still possible with modern RAM, but even if it isn't I wouldn't care to bet much on the proposition that they don't have some other way to read RAM contents out if they really, really want to.) This isn't even intended as a complete list of the possibilities, just more than enough to justify the idea that in very high security environments there's a variety of threats that come from leaving things in RAM longer than you absolutely need to. You can't avoid having things in RAM to operate on them, but you can ensure they are as transient as possible to minimize the attack window.
If you are concerned about secrets being zeroed out, in almost any language you need some sort of support for it. Compilers for non-GC'd languages are prone to optimizing away the zeroing of memory before deallocation, because under normal circumstances a write to a value just before deallocation that is never effectfully read can be dropped without visible consequence to the rest of the program. And as compilers get smarter it gets harder to fool them with code; simply reading the memory afterwards with no further visible effect might have been enough to fool 20th century compilers, but nowadays I wouldn't count on my compiler being that stupid.
There are also plenty of languages where you may want to use values that are immutable within the context of the language, so there isn't even a way to express "let's zero out this RAM".
Basically, if you don't build this in as a language feature, you have a whole lot of pressures constantly pushing you in the other direction, because why wouldn't you want to avoid the cost of zeroing memory if you can? All kinds of reasons to try to avoid that.
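To make the dead-store problem above concrete, here is a minimal std-only Rust sketch of the usual workaround (volatile writes plus a compiler fence; crates like zeroize wrap essentially this idea in a safe API):

```rust
use std::ptr;
use std::sync::atomic::{compiler_fence, Ordering};

// Wipe a secret with volatile writes. A plain `for b in buf { *b = 0 }`
// right before the buffer is dropped is a textbook dead store: the
// compiler can prove the zeros are never read and delete the loop.
// `write_volatile` marks each store as observable so it must be kept.
fn wipe(buf: &mut [u8]) {
    for byte in buf.iter_mut() {
        unsafe { ptr::write_volatile(byte, 0) };
    }
    // Keep subsequent operations from being reordered before the wipe.
    compiler_fence(Ordering::SeqCst);
}

fn main() {
    let mut secret = *b"hunter2";
    // ... use the secret ...
    wipe(&mut secret);
    assert!(secret.iter().all(|&b| b == 0));
}
```

This only covers the copy you control; spills to registers, stack temporaries, and swapped pages are separate problems, which is part of why language- or library-level support matters.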
Yet another Rust article that ignores the choice among memory safe languages, including the ML lineage that inspired Rust's type system.
The only reason to pick Rust instead of one of those is exactly the memory safety without GC that it offers, for when any form of automatic resource management, in whatever shape, is either not possible or not welcomed by the domain experts.
Not explosive (mostly Pu238), not water soluble, encapsulated in ceramic. If someone ground it up in a high energy ball mill and incorporated it into cigarettes it would be devastating; in fact US regulators are very wary of the French Pu + HEBM fuel fabrication process. As an intact lump in a glacier I wouldn't worry about it -- there's sort of an expectation that that kind of device will get lost in desolate places.
What a great job he did. It looks very professional, even though the numbers produced must be fairly low. I wonder how the shutter mechanism works, on most medium format cameras that's a work of art and a project in its own right.
None. Not everything that "can feel pain" is our responsibility.
What's our responsibility and what's not is based on made-up morals, which are grounded in evolutionary benefits and dangers combined with random historical developments.
Yeah it is, that's why I disclose this stuff.
The counter-incentive here is that my reputation and credibility is more valuable to me than early access to models.
This very post is an example of me taking a risk of annoying a company that I cover. I'm exposing the existence of the ChatGPT skills mechanism here (which I found out about from a tip on Twitter - it's not something I got given early access to via an NDA).
It's very possible OpenAI didn't want that story out there yet and aren't happy that it's sat at the top of Hacker News right now.
I did the same, then put in 14 3090's. It's a little bit power hungry but fairly impressive performance wise. The hardest parts are power distribution and riser cards but I found good solutions for both.
> "It's really not a very fair statement to say that they missed the digital photography that they actually had invented," he says.
I had no idea he's an animator, that's so cool! In that video he says "Lightwave is so deep, I won't live long enough to see everything that's in it". I'm glad he's proven wrong there!
I used to have an eBay account, and at some point, despite not having used it for a year or so, I got an email saying I was permanently banned from eBay.
No appeal, no reasons given, no possible way to create another account.
Just. Banned.
The companies need to be big enough to provide the amazing services they do, but once they are large enough they will never care about individuals.
My internal model of large companies is that they are intelligent, psychopathic aliens. The people in them are like cells in our body, important for the function, but with no agency, and they are not who you are dealing with.
You're dealing with the company, and it's an inhuman, psychopathic alien.
It would make more sense to stop offering gift cards, which make zero financial sense for the consumer, but why stop offering a lucrative product that people buy because they're bad at logic, when you can just shut down accounts and greatly inconvenience people at no cost to you?
All of them? Or some? Because I’m halfway sure I can find a Congressman who supports almost anything.
Unfortunate name, as "CM0" is a common abbreviation for the ARM Cortex-M0 core.
Browsing the web on here is almost completely out of the question, since it only has 512 Megs of RAM
How far we have fallen... a quadcore 1GHz CPU and 512MB of RAM seems like ample computing power for those who have been very productive on PCs with far less.
They could censor that in Chrome as well, in multiple ways. That's one reason why having your DNS services provider, browser provider and search provider as the same entity is an extra risk.
And a set of people rediscovered why cross compiling only works up to a certain extent, regardless of the marketing on the tin.
The moment one needs to touch APIs that only exist on the target system, the fun starts, regardless of the programming language.
Go, Zig, whatever.
Maybe Apple should rethink bringing back Mac Pro desktops with pluggable GPUs, like that one in the corner still playing with its Intel and AMD toys, instead of a big box full of air and pro audio cards only.
Yeah, at some point people are going to work out that the problem isn't Johnny, it's email. Email is distinctively hostile to secure messaging. No matter what software Johnny uses, "secure" email will always be inferior to alternative options.
https://www.latacora.com/blog/2020/02/19/stop-using-encrypte...
To paraphrase an old saying: Live by Big Tech, die by Big Tech.
After nearly 30 years as a loyal customer
I've heard others say this (and was a "loyal advocate" of Windows for around 2 decades myself), but the reality is they simply do not care. You are merely a single user out of several billion.
Many of the reps I’ve spoken to have suggested strange things
That almost sounds like some sort of AI, not a human. But if I were in your situation I'd be inclined to print out that response as evidence, and then actually go there physically to see what happens.
The trouble with formal specification, from someone who used to do it, is that only for some problems is the specification simpler than the code.
Some problems are straightforward to specify. A file system is a good example. The details of blocks and allocation and optimization of I/O are hidden from the API. The formal spec for a file system can be written in terms of huge arrays of bytes. The file system is an implementation to store arrays on external devices. We can say concisely what "correct operation" means for a file system.
This gets harder as the external interface exposes more functionality. Now you have to somehow write down what all that does. If the interface is too big, a formal spec will not help.
Now, sometimes you just want a negative specification - X must never happen. That's somewhat easier. You start with subscript checking and arithmetic overflow, and go up from there.
That said, most of the approaches people are doing seem too hard for the wrong reasons. The proofs are separate from the code. The notations are often different. There's not enough automation. And, worst of all, the people who do this stuff are way into formalism.
If you do this right, you can get over 90% of proofs with a SAT solver, and the theorems you have to write for the hard cases are often reusable.
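As a toy illustration of discharging one of those proof obligations with a solver-style check (a real SAT solver uses CDCL search rather than enumeration, and would negate the goal and show unsatisfiability; this only shows the shape of the problem):

```rust
// Toy "prover": check a small propositional obligation by exhausting
// all 2^3 assignments. The claim being verified:
//   (a || b) && (!a || c) && !b   entails   c
fn obligation_holds() -> bool {
    (0u8..8).all(|bits| {
        let (a, b, c) = (bits & 1 != 0, bits & 2 != 0, bits & 4 != 0);
        let premises = (a || b) && (!a || c) && !b;
        // The obligation holds iff no assignment satisfies the
        // premises while falsifying the conclusion.
        !premises || c
    })
}

fn main() {
    assert!(obligation_holds());
    println!("obligation discharged");
}
```

The reusable-theorem point above is that once a lemma like this is proved, it can discharge the same pattern wherever it recurs, rather than being re-proved at each call site.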
I can see "no progress in 50 years" in fundamental physics where the experimental frontier seems to be running away from us (though recent gamma astronomy results suggest a next generation accelerator really could see the dark matter particle)
In biology or chemistry it's absurd to say that -- look at metal organic frameworks or all kinds of new synthetic chemistry or ionic liquids or metagenomics, RNA structure prediction, and unraveling of how gene regulation works in the "dark genome".
Progress in the 'symbolic AI' field that includes proof assistants is a really interesting story. When I was a kid I saw an ad for Feigenbaum's 3-volume "Handbook of AI" and got a used copy years later -- you would have thought production rules (e.g. "expert systems" or "business rules") were on track to be a dominant paradigm, but my understanding is that people were losing interest even before RETE engines became mainstream. Even the expert system shells of the early 1980s didn't use the kind of indexing structures that are mainstream today, so whereas people were saying 10,000-rule rule bases were unruly in the 1980s, 10,000,000 well-structured rules are no problem now. Some of it is hardware but a lot of it is improvements in software.
SAT/SMT solvers (e.g. as part of proof assistants) have shown steady progress over the last 50 years, though not as much as neural networks because they are less parallelizable. There is dramatically more industrial use of provers, though business rules engines, complex event processing, and related technologies are still marginal in the industry for reasons I don't completely understand.
Bleen Jay. It's more blue than green, and also forms a mildly amusing pun, which is good for marketing.
You're mad about a 'split of opinions on new thing' story from nearly 4 years ago?
I don't believe you're serious. If Mr Rogers were broadcasting for the first time now opponents of public media would be deriding it as woke propaganda and worse.
How were we not already doing this?
Parent meant this as a statement of fact (stating it's x that lies, and implying it's not y, or that x lies more than y). As such (true or not) it makes perfect sense, and requires only a very intuitive and casual understanding to get it.
Your comment reads as if it was some failed attempt at some kind of axiomatic construction (x lies _therefore_ y doesn't).
Right. Congress has the power to preempt state law in an area related to interstate commerce by legislating comprehensive rules. The executive branch does not have the authority to do that by itself.
This is like Trump's "pardon" of someone serving time for a state crime. It does little if anything.
Quite a number of AI-related bills have been introduced in Congress, but very few have made much progress. Search "AI" on congress.gov.
>Dyslexia seems to be more of an issue in English than other languages right?
I don't think so. It's the medicalization or pathologization of dyslexia that's probably more of a thing in English. Same way many issues get medicalized and whole cottage industries and jobs grow around them.
> "Below are the brands I’ve identified as most likely to have dumb TVs available for purchase online as of this writing."
That just has to be an LLM at work.
I wouldn't send all my typing, across all apps, to a third-party company. Even letting it get to the mobile OS company is already stretching it...
As a Plex user I'd recommend a used last-gen game console as a TV source. In my AV room upstairs I've had an XBOX ONE S for a long time and more recently I got a PS4 Pro for the spare room downstairs -- both at Gamestop. I have some games for both of them but I am more likely to game on Steam, Steam Deck or mobile.
Every Android-based media player I've tried just plain sucks; the NVIDIA Shield wasn't too bad but at some point the controller quit charging. You can still get a game console with a built-in Blu-Ray player too, and it's nice to have one box that does that as well as being overpowered for streaming.
I have a HDHomeRun hooked up to a small antenna pointed at Syracuse which does pretty well except for ABC, sometimes I think about going up on the roof and pointing the small one at Binghamton and pointing a large one at Syracuse but I am not watching as much OTA as I used to. It's nice though being able to watch OTA TV on either TV, any computer, tablets, phones, as well as the Plex Pass paying for the metadata for a really good DVR side-by-side with all my other media.
As for TVs I go to the local reuse center and get what catches my eye, my "monitor" I am using right now is a curved Samsung 55 inch, I just brought home a plasma that was $45 because I always wanted a plasma. I went through a long phase where people just kept dropping off cheap TVs at my home, some of which I really appreciated (a Vizio that was beautifully value engineered) and some of which sucked. [1]
[1] ... like back in the 1980s everybody was afraid someone would break into your home and take your TV but for me it is the other way around
I think the standard answer here is modernc.org/sqlite.
> In 1994, came the Pentium with its FDIV bug: a probably insignificant but detectable error in floating-point division. The subsequent product recall cost Intel nearly half a billion dollars. John Harrison, a student of Mike’s, decided to devote his PhD research to the verification of floating-point arithmetic.
No mention of the effort by Boyer and Moore, then at their company Computational Logic, Inc., to do a formal verification of the FPU for the AMD5K86. The AMD chip shipped with no FDIV bug. [1]
I'm very confused. This is the font that matches the Google branding, and that they started using as a UX font in Gmail, Docs, etc.
I hate it in UX because it's so "geometric" -- works well for a logo, but not body text, so it's just a bizarre choice for UX. Unlike Roboto which continues to be great for that. (Google Sans is fine as display text though -- headings, logo, etc.)
But my understanding was that Google wanted to differentiate its first-party apps from other Android apps with a proprietary Google font.
But now they're opening that font up for everyone to use, so Google's apps will no longer look uniquely Google-branded.
I'm so confused what the heck is going on over there in Mountain View.
I think the point was that it's not a machine.
Stuff that we can deduce in math with common sense, geometric intuition, etc. can be incredibly difficult to formalize so that a machine can do it.
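As a small illustration (a sketch assuming Lean 4; the exact lemma names are from its standard library), even a fact any schoolchild accepts, like commutativity of addition, is not a single "obvious" step for a machine -- it has to be rebuilt by induction from the recursive definition of addition:

```lean
-- Commutativity of addition on Nat is "obvious" to a human, but a
-- proof assistant must derive it from the Peano-style definition,
-- by induction on the first argument.
theorem add_comm' (a b : Nat) : a + b = b + a := by
  induction a with
  | zero => simp
  | succ n ih => rw [Nat.succ_add, Nat.add_succ, ih]
```

And that's a best case: genuinely geometric or "by inspection" arguments often have no such short mechanical translation at all.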
Wow! See the classic https://en.wikipedia.org/wiki/TK_Solver
Z-Image Base and Z-Image Edit have been announced as being the same size as Turbo (or at least the whole set has been announced as being in the 6B size class), but slower: apparently 50 steps with CFG, given the announced 100 NFEs compared to Turbo's 9 NFEs (and Turbo, in the use they reference, doesn't use CFG).
Define "washed out"?
The white and black levels of the UX are supposed to stay in SDR. That's a feature not a bug.
If you mean the interface isn't bright enough, that's intended behavior.
If the black point is somehow raised, then that's bizarre and definitely unintended behavior. And I honestly can't even imagine what could be causing that to happen. It seems like it would have to be a serious macOS bug.
You should post a photo of your monitor, comparing a black #000 image in Preview with a pitch-black frame from a video. People edit HDR video on Macs, and I've never heard of this happening before.
Yes, it is the prominent anti-vax (and vaccines-cause-autism) activist (and Andrew Wakefield defender after his fraud and data manipulation was uncovered) Jenna McCarthy. Why does this inspire skepticism? (Skepticism that she would be the co-author; it should certainly inspire skepticism of the work itself.)
One of the few unqualified improvements that “X” (aka Twitter) made was rendering the usernames in a font that has wings for the lowercase L
Indeed. Plus basic facts like: is it serif or sans? Proportional or monospace? Designed for GUI interfaces, terminals, or print? I still don't know.
Just showing a single screenshot of it in its intended use would go a long way.
I clicked on one of the charts and had no idea if the font itself was bitmap, or if it had just been rendered at a tiny size without antialiasing.
>A model doesn’t “know” facts or measure uncertainty in a Bayesian sense. All it really does is traverse a high‑dimensional statistical manifold of language usage, trying to produce the most plausible continuation.
And is that so different from what we do behind the scenes? Is there a difference between an actual fact vs some false information stored in our brain? Or do both have the same representation in some kind of high-dimensional statistical manifold in our brains, and we also "try to produce the most plausible continuation" using them?
There might be one major difference, at a different level: what we're fed (read, see, hear, etc.) we also evaluate before storing. Does LLM training do that, beyond some kind of manually assigned crude "confidence tiers" applied to input material during training (e.g. trust Wikipedia more than Reddit threads)?
Lots of elegant, minimal things are hard to use effectively.
> All these weird mental gymnastics to argue that users should have less rights
We probably agree more than not. But users getting more rights isn't universally good. To complete the argument, one must consider the externalities involved.
For those used to traditional language syntax, anything in the APL family is like Chinese to someone who only knows Latin-family natural languages. It's always amusing to see all the reaction comments when APL/J/K is posted here.