A lot of the comments here suggest that some of the functionality can be replaced by extensions like vimium-ff [1]. If you only compare the lists of features, then maybe. But if you actually try using the extensions, you immediately notice that the keybindings don't work outside of a fully loaded page, have horrible lags and delays, and miss keyboard events from time to time.
For the curious folk, the GitHub page [2] seems to do a better job at telling the how of things.
My question is: why Lisp for scripting? What's the technical reason behind choosing it, or is it just personal preference? I use Emacs myself and my only complaint is Elisp. In this day and age it looks very alien to most people, and judging by my colleagues, it can be a major deterrent to using Emacs.
Thank you for the kind words. You've nailed it: we can only do so much with typical extensions; they are limited. As for your question, I'll do my best to answer.
1. Lisp is what enables any part of Nyxt to be reprogrammed at any time (even whilst running).
2. Lisp enables us to easily program DSLs for different bits of functionality. For example, we could write an interpreter for a DSL describing uBlock matrix rules.
3. We try to insulate users who don't know Lisp by providing graphical configuration interfaces (as Emacs does), and we are working on making these almost as effective as writing your config by hand.
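To make point 2 concrete, here is a hypothetical sketch of what interpreting simplified matrix-style rules could look like. This is Python rather than Lisp, it is not Nyxt code, and the rule format (`source-host dest-host action`) is a drastic simplification of real uBlock matrix rules (which also cover request types, scope inheritance, etc.) -- it only illustrates how small such a DSL interpreter can be:

```python
# Hypothetical sketch (not Nyxt code): interpret a tiny subset of
# uBlock-matrix-style rules of the form "source-host dest-host action".
# Real matrix rules are far richer; this only shows the DSL idea.

def parse_rules(text):
    """Parse 'source-host dest-host action' lines into a lookup table."""
    table = {}
    for line in text.strip().splitlines():
        src, dst, action = line.split()
        table[(src, dst)] = action
    return table

def decide(table, src, dst, default="allow"):
    """Return the action for a (source, destination) pair; '*' is a wildcard."""
    for key in ((src, dst), (src, "*"), ("*", dst), ("*", "*")):
        if key in table:
            return table[key]
    return default

rules = parse_rules("""
example.com ads.tracker.net block
example.com * allow
* ads.tracker.net block
""")

print(decide(rules, "example.com", "ads.tracker.net"))  # block
print(decide(rules, "other.org", "ads.tracker.net"))    # block
print(decide(rules, "other.org", "cdn.example.net"))    # allow
```

In a Lisp the same thing could be done without a parser at all, by expressing rules directly as S-expressions and `eval`ing or macro-expanding them.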
We're working on it. Big projects like this take time. Hopefully with time our vision will crystallize into something more obvious :-)
You are being disingenuous here. Lisp is cool but it does not enable "any part of Nyxt to be reprogrammed at any time".
Your project is essentially a thin UI veneer on top of C browser engines. That latter part, which is by far the most interesting, can not be reprogrammed in Lisp except through the very limited exposed C library API. Which leads me to the issues I see with Nyxt:
Security will always be a serious problem (and it is getting worse over time). You don't have the resources of Google or even Mozilla, who ship self-contained browsers and can implement security solutions in a holistic, systemic manner. Building a browser on the isolated engine libraries was (and still is) a recipe for disaster.
Then, for the same reasons, the big corporations can essentially kill your project at any point by treating it as unsafe/insecure and having their sites stop working (which is actually happening at Google).
But most important to me, security aside, is the lack of flexibility and power. The UI veneer and the subsets of the rendering engine library APIs that you expose as programmable in Lisp are the least interesting parts. Why would anyone want to reimplement uBlock or other popular extensions in some convoluted mix of Lisp and JavaScript, rather than use the superior originals? This is the final tombstone for me.
It's clear that you're trying to build an ecosystem and deliver "apps" on top of it, maybe something like an Emacs for the web, but for the reasons I mentioned I see it as fundamentally flawed and want no part of it.
A welcome critique, though unfortunately mostly incorrect :-D! I'll address some of your comments:
1. Nyxt is entirely written in Common Lisp, so yes, any part of it can be reprogrammed at any time. All of our FFI bindings are also written in Common Lisp (https://github.com/atlas-engineer/cl-webkit). In fact, you can even GENERATE bindings at run time. So it is irrelevant what part is invoking C, it is still fully funcall'able at runtime. This is what makes Nyxt not a 'thin veneer', but rather a deep integration which exposes all resources to the end-user (something unique to Nyxt).
2. Our project is a chrome that is agnostic of the renderer engine. We can use both WebKit and Web Engine (Chromium). This makes us resilient to renderer-specific problems. If websites decide to ban browsers that utilize WebKitGTK+, we'll have another renderer available to us. We talk about this in our article where we justify some of our technical design decisions: https://nyxt.atlas.engineer/article/technical-design.org
3. Security is very important to us. We rely on the upstream providers of the web engines (WebKitGTK+, Qt WebEngine) to test and audit them for us. We can't do everything, you're right about that. For this reason, we give users the choice, and hope for the best!
4. "Lack of flexibility and power"- I think this point is probably the most inaccurate. If you look through our articles you'll see a couple of things that make Nyxt powerful and flexible.
4.A. Composability: all extensions (plugins, as you may call them) can call and utilize each other. That means that as our ecosystem grows, its power grows exponentially!
4.B. Flexibility: We give the user access to the complete Common Lisp compiler, all packages on Quicklisp, and the ability to override and change literally any aspect of Nyxt. I often speculate that Nyxt is potentially the most flexible browser to have ever existed.
5. Lastly, no, we are not trying to deliver "apps" on top of Nyxt. Nyxt is a browser, and a programmable platform, not an app delivery mechanism.
I hope that the above was informative, and if it hasn't changed your mind, it has at least exposed you to our viewpoint!
Once again, thanks for the critique, and happy hacking!
1. You're arguing semantics in a disingenuous fashion, which reinforces my point and makes me like Nyxt even less. Nyxt, the end-user product, "is not entirely written in Common Lisp". Your rendering and browsing engine, the core of your application, WebKit, is not a Lisp library.
2. This doesn't solve the problem. Being able to use both WebKit and Web Engine in no way makes you resilient to the issue I brought up. Google has plenty of reasons to disallow and prevent their engine from being used by Nyxt, and they have said that they will do so. Your answer strengthens my point.
3. Again, the issue is systemic and there's nothing upstream providers of web engines can do. Using these libraries in isolation will always expose you to significantly more security issues than using them as part of the browsers they were meant to be used in.
4. Flexibility and power (as I explained) means me using uBlock Origin and countless other extensions without reimplementing them from zero. Do you have uBlock Origin for Nyxt? I rest my case.
I definitely agree. One thing to note about Kinesis Advantage 2 is that it is really massive compared to other keyboards. I didn't realize before getting one for myself and seeing it on my desk.
Second thing is that, quite frankly, at that price point the Advantage looks and feels a bit flimsy (especially the middle part between the key wells), though that has seemingly no impact on typing.
And lastly, I have a bit broader shoulders than average, so I need my hands a bit farther apart than the key wells on the Kinesis allow, and maybe tilted a bit more sideways (well, at least I think so; I am not a doctor). It is orders of magnitude better than a regular keyboard, but I still sometimes have sore muscles around my shoulder blades and the back of my neck.
All of these issues seem to be solvable by a split. I still hope that Kinesis will release a split concave keyboard themselves, since among all the concave keyboards it is by far the most user friendly (no assembly required, no QMK compiling, remapping is doable on the keyboard itself), but I am starting to ogle the likes of the Dactyl, Dactyl-ManuForm or Bastyl [1] (a modified Dactyl with less DIY required for more $$).
If I go the DIY way I am also thinking about mounting a trackball on it, similar to the Tracktyl (a trackball helped me immensely with my wrists). What I am a bit afraid of is that once I start doing customizations I'll need/want something easily modifiable/adjustable to experiment with quickly. This design [2] in combination with Bastyl-like flexible PCBs seems like a neat idea I'd like to try.
Make no mistake, the Advantage is, not entirely unlike a good office chair or a mattress, still worth every penny, but IMO the split concave keyboards are the "future future". I would not even have thought of building a keyboard before the Advantage, but now I do, and there is no turning back for me.
> This is a core feature of every ad platform I've seen and is absolutely not a violation of the GDPR.
I agree with you on this part. It is not a violation of the GDPR on the ad platform's side, since you, as the data controller, are responsible for obtaining permission from the end user. The ad platform is a data processor as defined under the GDPR. I am sure that the agreement between you and the ad platform states that you have permission to use the email addresses for targeted advertising purposes and that you bear the full legal responsibility if not.
> since users are giving consent when they signup.
See Nextgrid's comment. Yes, the GDPR is admittedly lacking on the enforcement side, and yes, I agree that this is a common practice, but that does not make it legal. Not for a data subject residing in the EU.
I wouldn't say that I am great at stats either, but AFAIK this is mostly because of super high tail latencies (usually the 99th percentile, commonly caused by garbage collection or cache misses) or because the latencies have a multimodal distribution [1] (e.g. where the monitored system has a fast path and a slow path for requests, so the latencies "group" around multiple points).
The average latency does not tell us much since it does not really represent anything meaningful - i.e. it is not the latency of the typical user/request as you might expect, since the average is skewed by the extremely high tail latencies [2] or by the multiple modes (the peaks in the latency histogram). The typical latency is better represented by the median (though not in the multimodal case, AFAIK).
As for why not just go with the median: you usually need to make multiple requests in parallel and end up waiting for the slowest request of the group. The 95th, 98th or 99th percentile is commonly used to cover for this (sorry, can't find a suitable reference). It is also preferable in the case of a multimodal distribution (at least for monitoring and/or general performance diagnosis, since you usually care about the worst/tail cases).
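A quick sketch of the points above, on made-up bimodal data (the 90/10 fast/slow split and the latency values are invented for illustration): the mean lands between the two modes and describes almost no real request, the median captures the fast path, the p99 exposes the slow path, and the fan-out math shows why the tail dominates once you wait on many parallel requests.

```python
import random
import statistics

random.seed(0)
# Made-up bimodal latencies: 90% fast path ~10 ms, 10% slow path ~200 ms.
lat = [random.gauss(10, 1) for _ in range(9000)] + \
      [random.gauss(200, 20) for _ in range(1000)]

def pct(data, p):
    """p-th percentile by nearest rank."""
    s = sorted(data)
    return s[min(len(s) - 1, int(p / 100 * len(s)))]

print(f"mean   {statistics.mean(lat):6.1f} ms")    # skewed toward the slow mode
print(f"median {statistics.median(lat):6.1f} ms")  # the typical (fast-path) request
print(f"p99    {pct(lat, 99):6.1f} ms")            # the slow path shows up here

# Fan-out: a page that waits on 20 parallel requests is slow if ANY is slow.
# With a 10% slow-path probability per request: 1 - 0.9**20 ~ 0.88.
print(f"P(page hits slow path) = {1 - 0.9**20:.2f}")
```

So for the example numbers, roughly 9 out of 10 page loads would hit the slow path at least once, which is why dashboards track p95/p99 rather than the mean.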
Nice summary, thanks. Do you think there is a chance of getting Google Hangouts running with microG? It's required by my current employer, unfortunately.
Try it. Once you have TWRP on your phone, you can back up your whole phone (including the ROM) to the SD card and later revert if it doesn't work.
I can't tell without installing it, and I have a strict policy of no Google/Facebook (actually, anything related to social media) apps on my phone. Or try to find an OSS alternative.
I was having the same issue. If you are fine with something more homegrown, you can use gplaycli [1] to download the APKs directly from Google Play. I use it in combination with rsync, but it should be possible, at least in theory, to host a private F-Droid repository with the downloaded APKs.
Unfortunately, this does not solve verification of the APK signature. As far as I understand it, Android uses something similar to "trust on first use" [2] with APK signatures, so verifying the signature before the first installation should be sufficient for most people.
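The trust-on-first-use idea can be sketched in a few lines. This is a hypothetical illustration, not how Android's package manager is implemented: Android actually pins the APK's signing certificate (so updates signed by the same key pass), while this toy version pins a plain SHA-256 fingerprint of the payload, and the `pins` dict stands in for the on-device record of first-seen signatures.

```python
import hashlib

# Simplified trust-on-first-use (TOFU) sketch. Android pins the signing
# certificate across updates; here we pin a plain SHA-256 fingerprint
# just to show the mechanism. 'pins' is a stand-in for the on-device
# record of first-seen signatures.

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def check_install(pins: dict, package: str, data: bytes) -> bool:
    """Trust on first use; afterwards require the same fingerprint."""
    fp = fingerprint(data)
    if package not in pins:
        pins[package] = fp      # first install: trust and pin
        return True
    return pins[package] == fp  # later installs: must match the pin

pins = {}
assert check_install(pins, "org.example.app", b"payload-from-known-source")
assert not check_install(pins, "org.example.app", b"payload-from-attacker")
```

This is why verifying the signature out of band before the *first* install matters: after that point, a swapped package no longer matches the pin and gets rejected automatically.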
What caught my eye: if you scroll down long enough on the dedicated servers page [1], you will eventually see GaaS - "Geek As A Service". This made me giggle far more than I am willing to admit :)
I just compared both services to find a new VPS provider and decided in favour of Hetzner. Why do you think online.net looks better?
My main problem with online was the poor IOPS performance due to non-local SSDs. But maybe my test was just on the wrong node?
I think that this is pretty much what IPFS [1] (and others I cannot remember right now) does. If I remember correctly, it builds on some of the BitTorrent ideas, but does not implement the whole BitTorrent protocol.
It might be interesting to integrate IPFS into standard BitTorrent clients.
IPFS is most definitely the long-term solution, but I'm now of the opinion that, in order to bridge the gap, IPFS needs to incorporate the BitTorrent protocol so you're serving IPFS from "supernodes". These nodes would abstract away BitTorrent and IPFS from the underlying data, since it's SHA hashes all the way down (both entire objects, and chunks of the object).
Think of how S3 can serve content either via HTTP or via torrent for each object. Same idea, except with a distributed announcement/hash table.
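The "SHA hashes all the way down" core shared by both systems can be sketched like this. It matches neither BitTorrent's piece hashing nor IPFS's Merkle DAG in wire format (and the 4-byte chunk size is purely for the demo; real systems use 256 KiB+), but it shows the common idea: chunk the object, address each chunk by its hash, and address the object by a hash over the chunk hashes, so the same bytes get the same address no matter who serves them.

```python
import hashlib

# Sketch of the content addressing shared by BitTorrent and IPFS
# (neither wire format): hash the chunks, then hash the chunk list.

CHUNK = 4  # tiny chunk size for the demo; real systems use 256 KiB+

def chunk_hashes(data: bytes):
    """Split data into fixed-size chunks and hash each one."""
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

def object_address(data: bytes) -> str:
    """Address the whole object via a hash over its chunk hashes."""
    return hashlib.sha256("".join(chunk_hashes(data)).encode()).hexdigest()

addr1 = object_address(b"hello world!")
addr2 = object_address(b"hello world!")
addr3 = object_address(b"hello world?")
assert addr1 == addr2  # same content, same address, whoever serves it
assert addr1 != addr3  # any changed chunk changes the address
```

A supernode could then look up the same chunk hashes regardless of whether a peer announced them over the BitTorrent DHT or the IPFS one, which is exactly the S3 HTTP-or-torrent analogy.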
I'm of the same opinion. It's much easier to gain mindshare by abstracting over the status quo and swapping out the innards than by starting from scratch with better tech but no existing culture around it.
I am glad to see this on the front page of HN! I use it mainly for additional web browser sandboxing, but it has some really convenient "side features" - namely overlayfs/overlaytmpfs. This is great when I am testing new software and don't want to clutter my home directory.
Disclaimer: I am just a satisfied user :)
EDIT: Looks like it now fully supports X11 sandboxing.
[1]: https://github.com/philc/vimium
[2]: https://github.com/atlas-engineer/nyxt