> In the pre-LLM world one would at least have had to search for this information, find the site, understand the license and acknowledge who the author is. Post-LLM, the tool will just blatantly plagiarize someone else's work, which you can then sign off on as your own
These don't contradict each other though; you could "blatantly plagiarize someone else's work" before as well. LLMs just add another layer in between.
Copyright violation happened before LLMs, yes, but it had to be done by a person who either didn't understand copyright (which is not a valid defence in court) or intentionally chose to ignore it.
With LLMs, future generations are growing up being handed code that may or may not be a verbatim copy of something someone else originally wrote under specific licensing terms, with no mention of any license or origin being provided by the LLM.
It remains to be seen whether there will be any future lawsuits specifically about source code substantially copied from someone else indirectly via LLM use. In any case, I doubt that such lawsuits, if they happen, will help small developers writing open source. It would probably be one of the big tech companies suing other companies or persons, and any money resulting from such a lawsuit would go to the big tech company doing the suing.
> That requires a high level of trust in your current government and whomever is in charge in the future.
Some entity has to be trusted with our data anyway; at least a government is supposed to have some accountability to its citizens, while corporations have much stronger incentives for profit.
Why is it a given that we need to trust an entity with our data? Most of human history got by without data collection, centralized or otherwise; there's no innate law of nature requiring it.
It doesn't require only trusting the government (or another corporation) today; it requires trusting all future iterations of them as well. It might be a different story if the data were periodically purged, say after each administration.
There are still a lot of underlying assumptions here worth noting, though. You're assuming we must have a government, and assuming what it must be able to do, like charge taxes or gatekeep certain activities behind licensing systems.
I'm not arguing we don't need a government. But silently taking for granted that everything from income taxes to public roads and travel restrictions is a given jumps ahead here.
We could decide, for example, that the government shouldn't be allowed to centralize certain data and remove some of what we expect them to do instead.
> We could decide, for example, that the government shouldn't be allowed to centralize certain data and remove some of what we expect them to do instead.
How exactly a government manages our data is a valid concern, and in the modern world this needs to be reevaluated.
I've used a few different Lisps for pet projects, and honestly, for me the biggest problem with Lisps today is typing. ADTs (and similar systems) are just super helpful when it comes to long-term development, multiple people working on code, big projects, or projects with multiple pieces (like frontend + backend), and they help AI tools as well.
And this is not something Lisps have explored much (is there anything at all apart from Typed Racket?), probably due to their dynamic nature. This is why I dropped Lisps in favour of Rust and TypeScript.
SBCL has fine type checking. Some is done at compile time -- you get warnings if something clearly can't be type-correct -- but otherwise, when compiled with safety 3 (which people tend to make the default), types are checked dynamically at run time. You don't get the program crashing from mistyping as one would in C.
> You don't get the program crashing from mistyping as one would in C.
Sorry, but I don't compare to C anymore; I want the same safety as in Rust or TypeScript: exhaustive checks, control-flow type narrowing, mapped types, and so on. Some detection at compile time is not enough; since there is a way to eliminate all type errors, I want to eliminate them all, not just some.
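For concreteness, here is a rough TypeScript sketch of two of the features I mean, control-flow narrowing and exhaustiveness checking (the types and names are made up purely for illustration):

```typescript
// A discriminated union: the "kind" tag drives the narrowing.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      // Control-flow narrowing: s is known to be the circle variant here.
      return Math.PI * s.radius ** 2;
    case "square":
      return s.side ** 2;
    default: {
      // Exhaustiveness check: adding a new variant to Shape makes this
      // assignment a compile-time error until the switch handles it.
      const unreachable: never = s;
      return unreachable;
    }
  }
}
```

The point is that forgetting a case is a build failure, not something you discover at runtime.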
Why stop there? Why not demand proof of correctness? After all, that's now within reach using LLMs producing the formal specification from a simple prompt, right?
SBCL does a fine job of detecting type mismatches within the frame of ANSI Common Lisp, not Haskell. While I would agree that a strict type system eases the long-term maintenance of large systems, for "explorative computing", proof-of-concepts, RAD or similar that tends to get in the way. And if such a proof-of-concept looks promising, then there is no shame in rewriting it in a language more suitable for scale and maintenance.
Proof of correctness would be fantastic, but I have yet to see it in action. LLMs could maybe do it for a simple program, but I'm pretty sure they will fail on large codebases (due to context limits), and types help a lot in that case.
> for "explorative computing", proof-of-concepts, RAD or similar that tends to get in the way
I would even argue that it's better to have a typed system even for POCs, because things change fast, which very often leads to type errors that need to be discovered. At least when I did that, I would often run manual tests after changes just to check that things still worked; with typing in place, that time can also be minimised.
> You don't get the program crashing from mistyping as one would in C.
Uh, isn't that exactly what happens with runtime type checking? Otherwise what can you do if you detect a type error at runtime other than crash?
In C the compiler tries to detect all type errors at compile time, and if you do manage to fool the compiler into compiling badly typed code, it won't necessarily crash, it'll be undefined behavior (which includes crashing but can also do worse).
> Uh, isn't that exactly what happens with runtime type checking?
No, it raises an exception, which you can handle. In some cases one can even resume via restarts. This is versus C, where a miscast pointer can cause memory corruption.
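To make the contrast concrete, here is a rough sketch in TypeScript/JavaScript terms (Lisp's condition system is richer than this, and restarts have no direct equivalent here, so take this only as an illustration of "catchable error vs. memory corruption"):

```typescript
// In a memory-safe runtime, a type mistake surfaces as a catchable
// exception rather than corrupting memory like a miscast C pointer can.
function shout(x: unknown): string {
  try {
    // If x is not a non-empty array of strings, this access throws
    // a TypeError at runtime...
    return (x as string[])[0].toUpperCase();
  } catch {
    // ...which we can catch and recover from, instead of the process
    // crashing (or worse, silently continuing with corrupted state).
    return "<not a string list>";
  }
}
```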
Again, a proper C compiler in combination with sensible coding standards should prevent "miscast pointers" at compile time / via static analysis. Anyway, being better than C at typing / memory safety is a very low bar to pass.
I'm curious in what situation catching a typing exception would be useful though. The practice of catching exceptions due to bugs seems silly to me. What's the point of restarting the app if it's buggy?
Likewise, trying to catch exceptions due to for example dividing by zero is a strange practice. Instead check your inputs and throw an "invalid input" exception, because exceptions are really only sensible for invalid user input, or external state being wrong (unreadable input file, network failures, etc.).
If "just don't do the bad things" is a valid argument, why do we need type checking at all?
Exceptions from type checking are useful because they tell you exactly where something has screwed up, making fixing the bug easier. It also means problems are reduced from RCEs to just denial of service. And I find (in my testing) that it enables such things as rapid automated reduction of inputs that stimulate such bugs. For example, the SBCL compiler is such that it should never throw an exception even on invalid code, so when it does so one can automatically prune down a lambda expression passed to the COMPILE function to find a minimal compiler-bug-causing input. This also greatly simplifies debugging.
A general reason I look down on static type checking is that it's inadequate. It finds only a subset, and arguably a not very interesting subset, of bugs in programs. The larger set of possible bugs still has to be tested for, and for a sufficient testing procedure for that larger set you'll stimulate the typing bugs as well.
So, yeah, if you're in an environment where you can't test adequately, static typing can act as a bit of a crutch. But your program will still suck, even if it compiles.
The best argument for static typing IMO is that it acts as a kind of documentation.
That's very silly, because in languages that do static type checking right, like Haskell, Rust, or to a lesser extent TypeScript, if you have a typing bug your program won't even compile. How's that "just don't do bad stuff"?
> It finds only a subset, and arguably a not very interesting subset, of bugs in programs. The larger set of possible bugs still has to be tested for, and for a sufficient testing procedure for that larger set you'll stimulate the typing bugs as well.
I agree that having your program be properly typed is a minimum requirement for it to even possibly be correct. So why would you not check that requirement statically? Skipping it is akin to not checking the syntax at compile time, instead preferring to crash at runtime due to invalid syntax. Or worse, throwing an exception. And then you're supposed to catch the "missing parenthesis exception"?
If you're building a bridge, presumably you'll calculate the forces and everything, before putting it together and verifying it actually holds for those forces. Likewise, if I specify that my function f returns a positive integer, why would I not let the compiler check that assumption statically? Are you really saying you'll write a test like (assert (> (f) 0)) or whatever? Seems like a huge waste of time building the app and starting up the test suite, when you could just as well just have stopped the build when the compiler saw you were returning the wrong type.
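To sketch what "let the compiler check it" could look like (TypeScript here; "positive integer" isn't a built-in type, so a branded type is one common illustrative workaround, and every name below is hypothetical):

```typescript
// "Positive integer" pushed toward the type level via a branded type.
type PositiveInt = number & { readonly __brand: "PositiveInt" };

function asPositiveInt(n: number): PositiveInt {
  // The single runtime check lives at this one boundary...
  if (!Number.isInteger(n) || n <= 0) {
    throw new RangeError(`${n} is not a positive integer`);
  }
  return n as PositiveInt;
}

// ...so f's signature carries the guarantee: callers need no
// (assert (> (f) 0))-style test, and returning an unchecked plain
// number from f would be rejected at compile time.
function f(): PositiveInt {
  return asPositiveInt(42);
}
```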
> For example, the SBCL compiler is such that it should never throw an exception even on invalid code
What does this even mean? What does it do if you run (/ 1 x) where x = 0?
> So, yeah, if you're in an environment where you can't test adequately, static typing can act as a bit of a crutch. But your program will still suck, even if it compiles.
This is a ridiculous argument, and is much closer to "just don't do the bad things" than what I said. Also note that I never said "don't write tests". You're making a complete strawman by pitting type checking against testing. They are two complementary methods of verifying correctness, and the type system is a quicker and more accurate way of catching bugs, so moving all the logic you possibly can into the type system and away from tests is a huge win in efficiency and correctness. That doesn't mean skipping testing, just like checking the syntax of your program statically doesn't mean skipping testing.
Just because I'm wearing a seat belt (type checking) doesn't mean I'll drive recklessly (skip testing). Likewise, just being a careful driver (writing tests) doesn't mean you should stop wearing your seat belt (type checking).
You can run Coalton on Common Lisp. It has a type system similar to Haskell’s. And interops very well with pure Common Lisp. It also modernizes function and type names in the process so it makes Lisp more familiar to modern developers. I tried it in a small project and was impressed.
> Or if the code is a mess. Or if it doesn't follow conventions.
In my experience these things are very easily fixable by AI; I just ask it to follow the patterns and conventions found in the code, and it does that pretty well.
I've recently worked extensively with "prompt coding", and the model we're using is very good at following such instructions early on. However after deep reasoning around problems, it tends to focus more on solving the problem at hand than following established guidelines.
Still haven't found a good way to keep it on course other than "Hey, remember that thing that you're required to do? Still do that please."
The problem is that people think socialisation is some mandatory thing, like food or air, but the truth is: it is not.
We are born alone and we will die alone; there is nothing bad about it, it is just how life is. You can have people around you, but in your thoughts, in your emotions, in your experiences, you are always alone. There have been lots and lots of people who lived just fine, very productive and profound lives, and were socially alone.
Once you realize it, the problem is gone, or rather you see that there was no problem, just a certain conditioning by the society you grew up in. What can help here is not psychological nonsense, but some meditations definitely push you towards this (and other types of) realisation.
Disagree completely. All the most substantive experiences and memories of one's life happen within groups. "Born alone, die alone" is reductive, given that each person only has fragments of information and insight individually, so we spend most of our time together in some form.
You are born of another human, but even they can never understand you fully, never experience you fully, never know you fully. The problems of parents and kids all come from that and are as old as time. If even parents can't do it, there is no chance with other people.
This is the trick of the universe, or a trade-off if you will: being completely alone also means being completely free in terms of internal experience. If you realise that, it is the greatest gift; if you are unaware, it can feel like a curse.
I don't think so. Solipsism is not being sure about what exists outside of mind.
I'm saying that others can't understand you fully: not just the mind, but the whole combination of memory, emotions, experiences, etc. Therefore being alone is no different from being surrounded by other people. I'm not saying they don't exist or anything like that.
I'm not talking about "liking" it; if you like being around people, there are many ways to do it, unless you live in a mountain cave or something. In any city there are dozens of volunteering groups that would be very happy if you came over and helped, and you would be around people as well.
But in our internal state of being, in our thoughts, in our emotions, and in the very experience of life, we are always alone. Yes, we can try to express them to others to some extent, but it can never be complete. In that sense we are fundamentally alone, and realising that makes the problem disappear.
Because if I'm always alone internally and nature made it so, why worry about it? The need for desperate attempts to fix it disappears, and you are just fine both ways: when there are people around, or when there is no one around.
it's not just conditioning, there is likely some biological drive because we evolved as a social species, but he's right, there is also conditioning and it can be dealt with. there are plenty of people who live in solitude and plenitude because they chose or learned to do so.
we're told that we need connection, but what we seek in others is really ourselves: our meaning, our purpose, we need to matter. what we actually find in others is only the illusion of that. it works (usually) and it feels good but not necessarily for everyone and there are ways to do that all by yourself. just be nice to yourself and enjoy existence. some will contemplate you as a weirdo, but that's their conditioning kicking in. it may not be for everyone, but there's really nothing wrong with that.
i was raised in a crowded family. i had dates and got married and got kids. i have a few friends left, some family left, acquaintances, sport comrades, sporadic contact and interaction with all of them ... but i spend most of my time alone and doing my thing, and rarely get bored, days fly. sometimes i might feel empty, lonely, depressed ... well then i reach out, or just soldier on, or distract myself, i know it will pass. and i think everyone has such moments, i had them all my life, being permanently crowded just distracts you from that. all in all, looking back, i'm having the blast of my lifetime and this is how i want to live the rest of it.
Fundamentally, not enough. Linux's default security mechanisms are simply too weak for something as potentially hostile as a mobile device. Firejail is a good start, but proper user isolation, as Android does it, is the right solution (each app is a different user, and accessing their data/user data is only done through Providers or IPC); anything else is naively trusting and not enough, no matter how many layers of sandboxing and suid-ing you do. Doubly so when all of its apps are written in C++. Can't wait to deal with use-after-free on my mobile device.
In addition, its compatibility with Android apps is also a set of chains: why would I bother developing for Sailfish (especially since it involves Qt / Qt Creator) when I can just develop an Android app and say it'll run well enough (unless it needs Play Integrity, which is the same problem, or somehow falls behind in android/androidx compatibility)?
> Linux's default security mechanisms are simply too weak for something as potentially hostile as a mobile device.
Linux has SELinux as a default option, which Android makes good use of (some forks more than others), and set up correctly it is better than user isolation. You could also recreate the protection user isolation provides through policy alone.
It is _the_ 2FA device: from SMS, to authenticators, to password managers, etc. It also has access to all of your personal information: your pictures, your contacts, your email. It actively receives notifications and messages from the outside world, from potentially any sender. It's connected through WiFi, GPS, 5G, Bluetooth, UWB, every possible connection system imaginable. It can listen to your phone calls, read your text messages, interact on your behalf with pretty much everything in your life, and is a single facial recognition away from automating the emptying of your bank account. Not to mention that mobile software does tend to want to survive at least a little bit when offline, so plenty of data is stored locally.
It's a key to your life. The perfect target for any attacker.
My Linux laptop is my 2FA device (email); it holds my passwords and personal data like photos, contacts, and email. It receives notifications and messages from the outside world, from potentially any sender. It connects through Wi-Fi, Bluetooth, Ethernet, and 5G (built-in WWAN). It even has cameras and microphones, and I use it for my online banking and shopping. The only reason smartphones "need" to be ultra-secure is that everyone and their mother has one, and the truth is most people can hardly tell the difference between their head and their butt.
Well, yes. Security measures aren't for the principled, tech-savvy crowd who are up to date on the latest malware and exploits. That's how Apple rose to power; it put convenience first and told users it would worry about all the privacy stuff for them.
A bit contradictory, but that's what the people want. They (as a mass) always choose convenience over both freedom and security. So that's why we always converge towards a centralized power, in tech and the larger world.
Because regular users (non-techies) install all kinds of apps on their phones, from all kinds of sources/vendors, but not on their desktop. Most people use only a handful of applications on their desktop (browser, office suite, …) but they have dozens if not hundreds of different apps on their phone.
There’s no use arguing. As the ancient Lisp proverb says, when the programmer is ready, the parens will disappear. Until then, you’re just wasting your breath.
No because the syntax is so awful. Programming languages are consumed by machines but written by humans. You need to find a middle ground that works for both. That's (one of the reasons) why we don't all program in assembly any more.
Lisp and similar are just "hey it's really easy to write a parser if we just make all programmers write the AST directly!". Cool if the goal of your language is a really simple parser. Not so cool if you want to make it pleasant to read for humans.
I've never used a Lisp either, but I get the impression that "forcing you to write the AST" is sort of the secret sauce. That is, if your source code is basically an AST to begin with, then transforming that AST programmatically (i.e. macros) is much more ergonomic. So you do, which means that Lisp ends up operating at a higher level of abstraction than most languages because you can basically create DSL on the fly for whatever you're doing.
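That intuition can be sketched even outside Lisp. Here nested arrays stand in for s-expressions, and a toy "macro" is just an ordinary function from tree to tree (the names and the `unless` rewrite are purely illustrative):

```typescript
// Because the source is already tree-shaped, transforming it needs
// no parser: a "macro" is a plain function over the tree.
type SExpr = string | number | SExpr[];

// Toy macro: rewrite (unless cond a b) into (if cond b a), recursively.
function expandUnless(e: SExpr): SExpr {
  if (!Array.isArray(e)) return e;
  const expanded = e.map(expandUnless);
  if (expanded[0] === "unless" && expanded.length === 4) {
    const [, cond, thenForm, elseForm] = expanded;
    return ["if", cond, elseForm, thenForm];
  }
  return expanded;
}
```

Since there is no separate surface syntax to re-parse, the transform and the source share one representation; that is roughly what Lisp macros exploit.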
That's my impression, at least. Like I said, I've never actually used a Lisp. Maybe I'm put off by the smug superiority of so many Lisp people who presume that using Lisp makes them better at programming, smarter, and probably morally superior to me.
> Lisp and similar are just "hey it's really easy to write a parser if we just make all programmers write the AST directly!".
It's not just that; it makes the syntax more uniform, and it allows adding all sorts of features using the same parens syntax, where other languages have to invent all sorts of special symbols to distinguish between things.
It makes it easier to parse for both machines and humans. This is why I asked if you had ever written in Lisps, because it takes some time to adjust, but once you do, it all makes sense.
Many languages have tried to make programming look like human-readable text, and they all failed in one way or another, because writing program instructions requires specific structure, and s-expressions provide that extremely well.
As someone who writes a lot of Scheme, I agree that the math syntax is not good. There have been proposals to add infix expressions (https://srfi.schemers.org/srfi-105/) but nobody seems to want them, or can agree on specifics.
However, code that is mostly function calls is fine for me, since those would have parentheses anyways in C++/Rust/whatever. In that case it makes the language more regular, which is nice for writing macros.
Earlier last year, I "quietly" introduced an infix support macro into TXR Lisp.
I devised a well-crafted macro expansion hooking mechanism (public, documented) in support of it.
It works by creating a lexical contour in which infix expressions are recognized without being delimited in any way (no curly brace read syntax translating to a special representation or anything), and transformed to ordinary Lisp.
A translation of the FFT routine from Numerical Recipes in C appears among the infix test cases:
The entire body is wrapped in the (ifx ...) macro and then inside it you can do things like (while (x < 2) ...).
In completing this work, I have introduced an innovation to operator precedence parsing, the "Precedence Demotion Rule" which allows certain kinds of expressions to be written intuitively without parentheses.
This view is false because what is hard to parse for machines also presents difficulty for humans.
We deal with most languages (Lisp family and not) via indentation, to indicate the major organization, so that there isn't a lot left to parse in a line of code (unless someone wants to be "that" programmer).
> This view is false because what is hard to parse for machines also presents difficulty for humans.
Yes definitely to some extent, but they aren't perfectly aligned. Most languages make things a bit harder to parse for machines but easier for humans. Some get it wrong (e.g. I would say OCaml is hard to parse for humans, and some of C's syntax too like the mental type declaration syntax). I don't think you could say that e.g. Dart is harder to parse for humans than Lisp, even though it's clearly harder for machines.
If they were perfectly aligned, you could easily parse 128 levels of parentheses without any line breaks or indentation.
Every rule for inferring a hidden, implicit structure within a sequence of tokens makes things harder, even on the scale of just a few tokens in one line within an indented structure.
I'm assuming the website is written in Loon, and according to the roadmap it's at version 0.4, with compilation planned for 0.7. So it demonstrates that the language works, but it's not optimised yet.
exactly! I didn't post this (thank u whoever did though) so wasn't ready to launch yet. but the idea is it will SSR and hydrate each page. I want to pull it all out into a framework congruent to Next.js
Loon looks very cool; I didn't expect something like that to appear at all. The last time I was this excited was about Red a few years ago, but that doesn't seem to be going anywhere.