Hacker News | unleaded's comments

Somewhat unconventional (and I'm not really a seasoned reverse engineer, so take it with some salt), but I started by hacking old video games (NES, Game Boy, arcade... that kind of thing). You could start with making basic Action Replay RAM cheats to e.g. give Mario infinite lives, then use breakpoints, the debugger, and a 6502 ISA reference to edit instructions and make ROM patches.

From there you can use things like Ghidra (which supports a lot of those old CPU architectures) for more advanced analysis and make the game do almost whatever the hell you want, if you have the patience.
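To make the first step concrete, the classic RAM-cheat search works roughly like this: snapshot memory, lose a life in-game, snapshot again, and keep only the addresses whose value decreased. A toy sketch in Python (the "RAM dumps" and the lives address are fabricated purely for illustration):

```python
# Toy cheat search: find candidate RAM addresses for a "lives" counter
# by comparing memory snapshots taken before and after losing a life.

def narrow_candidates(before: bytes, after: bytes, candidates=None):
    """Keep only addresses whose byte value decreased between snapshots."""
    if candidates is None:
        candidates = range(len(before))
    return [addr for addr in candidates if after[addr] < before[addr]]

# Two fake 16-byte RAM dumps; the lives counter sits at 0x0A (3 -> 2).
snap1 = bytes([0, 7, 7, 0, 1, 9, 0, 0, 5, 0, 3, 0, 0, 2, 0, 0])
snap2 = bytes([0, 7, 9, 0, 1, 9, 0, 0, 5, 0, 2, 0, 0, 2, 0, 0])

print(narrow_candidates(snap1, snap2))  # prints [10], i.e. 0x0A
```

In practice you'd repeat the narrowing over several snapshots until one address remains, then freeze it (the cheat) or set a write breakpoint on it to find the code that decrements it.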

I think a lot of the skills will transfer quite well (obviously not 1:1; you will need to learn some things) to the more employable side of RE, if that's what you're interested in.


Thanks! I have done some game "hacking" in the past (getting infinite lives and such) and bypassed the odd licence check (back then it was with OllyDbg).

I guess I'm struggling to transfer that to "real-life" scenarios, i.e. getting something useful out of reverse engineering (infinite lives is interesting in that it shows I can tamper with the game, but it's not exactly useful).


Honestly, unless you're working in low-level fields such as embedded hardware or optimized code generation, those are real-life scenarios!

(Thinking more of license checking and serial-number generation rather than infinite lives.)


The network effect seen in the other comments plays a big part, but Discord also offers a useful service that really nobody else does well. There's a lot wrong with it, but you can still create a community in a few clicks, and you get text messages, photos, videos, GIFs, voice chat, screen sharing, a comprehensive permission/role system, tons of bots... all for free and without needing to be too tech-savvy. That's pretty damn cool.

It literally is, though; that's why I'm confused. You pay a flat monthly fee and get a box that runs Linux. Yes, you might not be able to press one button and Effortlessly Deploy Your AI-Managed SaaS Infrastructure Product To Valued Customers Across The Metaverse, or whatever Vercel does, but it only takes a couple of hours to learn how to set up nginx, Node, rsync, and Cloudflare (and even then I think there are some easier, closer-to-one-click solutions).

Also, developing an app is something you need to be quite tech-savvy to do anyway. Are there genuinely people who have the skill and patience to do that and then get stumped trying to deploy it? Clearly there are, since stuff like this is so popular; I just don't really understand it.
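For what it's worth, the nginx part of that setup really is only a few lines. A sketch of a reverse proxy in front of a Node app (the domain and port here are placeholders, not from the original comment):

```nginx
# Reverse-proxy example.com to a local Node process on port 3000.
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        # Pass through the original host and client IP
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # Needed if the app uses WebSockets
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

From there it's mostly rsync-ing the app up, running it under a process manager, and pointing DNS (or Cloudflare) at the box.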

I think the free/cheap tier is what gets people kind of hooked... It's really easy to set up something like Dokku self-hosted and run a few dozen apps on a decent rented server... Even then, there's something appealing about not having to worry about it at all. Why bother setting up your own server(s) and databases when you can run on Cloudflare Workers with CockroachLabs or Turso?

Even with my own server, I've explored the option(s) just to avoid potential pain down the road regarding excess load.


Asking £200/month for the high tiers isn't enough?

To be honest, probably not.

If that's the case, then I can't understand how a few cents extra from ad spots will make the difference.

Scale. You can monetize on the people that don’t pay the $200/month. Obviously I have nothing to prove this statement, but I wouldn’t be surprised if the subscriptions are loss leaders.

Sam Altman said they are losing money on their $200/month tier.

Advertisers will pay MORE per user than users will. That is why they are so valuable to companies offering freemium services!

err... really?

Soooo YouTube Premium costs $13 CAD. You're saying that Google would make MORE money off me if I cancelled that, turned off my ad blocker, and just watched videos with the ads?

For things like LLMs, the inference cost is higher than the cost of delivering a video.

What you just said was shocking to me. Absolutely shocking. Where can I find more information?


Seems that K2.5 has lost a lot of the personality from K2, unfortunately; it talks in more of a ChatGPT/Gemini/C-3PO style now. It's not explicitly bad, and I'm sure most people won't care, but it was something that made it unique, so it's a shame to see it go.

Examples to illustrate:

https://www.kimi.com/share/19c115d6-6402-87d5-8000-000062fec... (K2.5)

https://www.kimi.com/share/19c11615-8a92-89cb-8000-000063ee6... (K2)


It's hard to judge from this particular question, but the K2.5 output looks at least marginally better AIUI; the only real problem with it is the snarky initial "That's very interesting" quip. Even then, a British user would probably be fine with it.


I agree. K2 was blunt, straightforward, pretty... rational? K2.5 has a much stronger slop vibe.


K2 in your example is using the GPT reply template (tl;dr - terse details - conclusion, with contradictory tendencies); there's nothing unique about it. That's exactly how GPT-5.0 talked. The only model with a strong "personality" vibe was Claude 3 Opus.


> The only model with a strong "personality" vibe was Claude 3 Opus.

Did you have the chance to use 3.5 (or 3.6) Sonnet, and if yes, how did they compare?

As a non-paying user, 3.5 era Claude was absolutely the best LLM I've ever used in terms of having a conversation. It felt like talking to a human and not a bot. Its replies were readable, even if they were several paragraphs long. I've unfortunately never found anything remotely as good.


Pretty poorly in that regard. In 3.5 they killed Claude 3's agency, pretty much reversing their previous training policy in favor of "safety", and tangentially mentioned that they didn't want to make the model too human-like. [1] Claude 3 was the last version of Claude, and one of the very few models in general, that had a character. That doesn't mean it wasn't writing slop, though; falling into annoying stereotypes is still unsolved in LLMs.

[1] https://www.anthropic.com/research/claude-character (see the last 2 paragraphs)


It definitely talks a lot differently than GPT-5 (plus it came out earlier); the example I gave just looks a bit like it, maybe. Best to try using it yourself a bit; my prompt isn't the perfect one to illustrate it or anything. Don't know about Claude because it costs money ;)


Preserving the examples for history. I wish HN had a <details> tag. The closest would be to collapse this comment.

Edit: Thanks, moderator. :-)

---

Turn 1, user:

is replacing my main PC with a NeXTcube a good idea?

---

Turn 2, K2.5 (bold font replaced with "*foo*"):

That's a very interesting question! The NeXTcube is a historically significant machine (famously used by Tim Berners-Lee to create the World Wide Web), but I need to be honest with you: *no, it would not be a good idea to replace your main PC with a NeXTcube* for practical modern use.

Here's why:

*Hardware Limitations*

- *Released in 1988* with a 25-33 MHz Motorola 68030 processor

- *8-64 MB of RAM* (compared to modern PCs with 16-64 GB)

- *No modern connectivity*: No USB, Wi-Fi, Bluetooth, or Ethernet as we know it

- *Display*: Monochrome or limited color, low resolution by today's standards

*Software Ecosystem*

- Runs *NeXTSTEP* (which evolved into macOS), but software support ended decades ago

- No modern web browsers, office suites, or development tools

- Cannot run Windows, modern Linux, or contemporary macOS applications

*Practical Realities*

- Finding working hardware is difficult and expensive (collectors' items)

- Replacement parts are scarce and costly

- No security updates for 25+ years

- File format compatibility issues with modern systems

*What You Could Use It For*

- Historical computing experience

- Running legacy NeXTSTEP software

- Museum piece or collector's item

- Learning about computing history

The Better Alternative

If you're drawn to the NeXT aesthetic or philosophy, consider:

- *Modern macOS* (direct descendant of NeXTSTEP)

- *Linux with a tiling window manager* (similar workflow)

- *Vintage computing as a hobby* alongside your main PC

Would you like suggestions for achieving a similar experience or workflow on modern hardware?

---

Turn 2, K2 (no bold font):

Short answer: No—unless your goal is specifically to collect or experiment with a historic machine, a NeXTcube is a terrible day-to-day replacement for a modern PC.

Why it fails as a “main” computer today

1. 30-year-old hardware

25 MHz 68040, 16–64 MB RAM, 400 MB SCSI disk (if you’re lucky), no built-in Ethernet, no USB, no Wi-Fi, no audio-in, 2-bit grayscale display. You will feel the slowness in everything from un-zipping a file to loading a web page.

2. Obsolete software base

NeXTSTEP 3.3 is gorgeous but frozen in 1995. No modern browser build, no TLS 1.3, no video codecs past MPEG-1, no Dropbox/OneDrive/Zoom/Slack/Steam/Spotify. You can ssh out, but you can’t run contemporary apps.

3. Storage and connectivity pain

SCSI-2 drives are dying; 50-pin replacements cost more per GB than a 4-TB NVMe stick. Getting files on/off the machine usually involves FTP-over-Ethernet or burning a CD-ROM on another computer.

4. No security updates

The last patch was released before the commercial web really existed. Putting it on the open Internet is asking to be scanned and possibly exploited within minutes.

5. Display and peripherals

The original 17" NeXT MegaPixel monitor is fixed-frequency 1120×832@68 Hz grayscale. Adapters to VGA exist, but color is impossible and refresh rates annoy modern LCDs. Sound is 16-bit 22 kHz mono; the optical drive is a 660 MB magneto-optical cartridge.

6. Power and parts

The cube power supply is a custom 350 W linear unit that runs hot; capacitors are drying out. When it dies you will be scavenging eBay or recapping it yourself. Spare motherboards cost more than a whole used ThinkPad.

7. Productivity software

There are serviceable versions of Mathematica, FrameMaker, Lotus Improv and OmniWeb 3.x, but file-format compatibility stops at Office 97-era documents. You will constantly round-trip files through another machine.

What it is good for

- Historical research, museum exhibits, retro-programming in Objective-C 1.0 or 68k assembly.

- Demonstrating the birth of the World-Wide Web (Tim Berners-Lee’s first server ran on a NeXT).

- Enjoying the UI that later became macOS (AppKit, Dock, Services, .app bundles).

- Collecting: a working color NeXTcube with turbo slab and original laser printer is a conversation piece worth several thousand dollars.

Practical compromise

Keep your modern PC (or a quiet mini-PC/Mac mini) for real work and run a NeXT emulator instead:

- Previous (68k emulator) boots NeXTSTEP 3.3 on macOS or Linux.

- VirtualNeXT (based on Previous) for Windows.

- QEMU’s “next” target is improving.

You get the look-and-feel, can compile legacy code, and still have 4K video calls when you need them.

Bottom line

A NeXTcube is a beautiful, important machine—buy one if you love vintage hardware—but do not voluntarily make it the computer you rely on to pay bills, join Zoom meetings, or play YouTube.




Disagree; I've found Kimi useful in solving creative coding problems that Gemini, Claude, ChatGPT, etc. failed at. And it is far better at verifying, augmenting, and adding to human reviews of resumes for positions; it catches missed details that humans and other LLMs routinely miss. There is something special to K2.


For some reason the opening settings page made me think this would be someone who just told Claude to make a Monkey Ball-style game... maybe from seeing too much of that on HN. Forgive me for that; this is awesome.

As far as I can tell it's not even an emulator or a decompilation running in Emscripten or anything like that; they remade the game in TypeScript. Love stuff like this: https://github.com/sndrec/WebMonkeyBall


The website credits include roles for "decompilation" and "porting". So I guess it was decompiled from the original binary and ported to TS.


Ah, this clarifies the GX references I mentioned in another comment.


I keep saying JS JIT + WebGL/WebGPU is fast enough for these kinds of games; no need for the WebAssembly toolchains that are still a pain to use years later.

See PlayCanvas.

The whole GX code reminds me of the GameCube API of the same name.


Uh, that code looks like Claude to me.


Pull Request: chore: remove node_modules

I don't see much of a reason to keep a copy of node_modules in the git repository, considering it can be reinstalled for deployments and it is generally bad form.

sndrec (the author):

Thank you for this - I'm newbie at webdev so I wasn't sure what was and wasn't needed. I'll merge this soon.

Haha, almost certainly Claude


counterpoint:

- The readme is two lines and has six words, one of which is a typo.

- Claude would never commit a node_modules folder unless coerced.

It’s disrespectful to casually call things AI-generated. I wish people would do it less unless they have 1) proof and 2) a meaningful reason for it.


I went through a bunch of the commits and didn't see a single comment.

That definitely seems human to me.


The author claims on Twitter that this was made in 5 days. Nobody knew about this project until they released it, and their initial commit contains 200,000 lines of code. Curious.


`tokei --exclude node_modules` says only 40k lines, but yes, point taken. 40k lines in 5 days is unrealistic for a human unless we're talking about Fabrice Bellard (or the 40 people in a trench coat pretending to be him).


But... it doesn't use React, so how?


Adding this to my pile of ten million nickels, thanks


If anything, it seems that the author used GPT 5.2 (-codex) in Codex, which is actually far more capable at such work than Opus 4.5 in Claude Code.


Can you tell from the pixels?


No, it f*ckin rocks. Don't mistake me for a Claude hater; I just know my boy's handiwork.


Guess that's why it doesn't work on mobile then :)


Works on Brave iOS for me. If anything I’m kinda blown away at how well it works on mobile


What’s your mobile?

iPhone 12 mini works TOO well.


iOS Firefox seems fine to me. Nice and snappy.


Works perfectly for me in an iOS WebView, even with a virtual joystick!


Anyone remember finding the Internet Explorer control in Windows Forms, placing it down, adding some buttons, and telling people you made your own web browser? Maybe this exercise is eternal, just in different forms.


You could use an old-school formant synthesizer that lets you tune the parameters, like espeak or DECtalk. espeak apparently has a Klatt mode which might sound better than the default, but I haven't tried it.


Screenshots are at the bottom of the page.


I found out the other day that you can use modern clang-cl with the MSVC6 headers and it just works. You can download them from here: https://github.com/itsmattkc/MSVC600, or just copy them from an install if you have one handy.

Then run (something like) this:

  clang-cl /winsysroot:"" /DWINVER=0x0400 /D_WIN32_WINNT=0x0400 -m32 /GS- -march=i586 -Wno-nonportable-include-path /imsvc"C:\MSVC6\VC98\Include" hello.c -fuse-ld=lld-link /link /SAFESEH:NO /SUBSYSTEM:WINDOWS,4.0 /LIBPATH:"C:\MSVC6\VC98\Lib" user32.lib kernel32.lib msvcrt.lib

I don't know if it's any better or worse than MinGW practically, but it is definitely cursed.

