Thanks for sharing the video.
Very interesting.
If the whole thing is too long to watch, I suggest starting at 17:30.
Here is my layman interpretation (disclaimer: I am not by any means knowledgeable in biology, and just looked up most of the terms on Wikipedia):
1) our immune system has T cells that are responsible for recognizing whether an antigen is a 'foreign' molecule/substance or something that's generated by our own body (self).
These types of T-cells are called 'Regulatory T-Cells' [1]
2) Cancers, as an example, trick these cells into thinking that the cancerous cells are 'self'
3) For these regulatory T-cells (T-regs) to get activated, they need 2 signals:
-a- signal that says -- here is a pathogen
-b- signal that says -- this pathogen is foreign (if not, the immune system will not get activated).
Toll-like receptors help with -b-.
They work, in a way (this is my analogy, sorry if it is lame), like RegEx (regular expressions) that have evolutionarily encoded patterns indicating the origin of a given antigen.
And that's the core of Ruslan Medzhitov's research
4) So if we get a regex match on the antigen structure (which is 'prepped up' for the check by slicing its proteins into peptides) -- signal 2 is generated, and the immune system will activate.
If the match did not happen -- no activation.
5) Another interesting thing (or maybe I misunderstood) is that those T-reg cells each do not have 'all the patterns'. Instead, our body randomly generates many of those cells, and each one has just one of those 'RegExes'.
So another key is that our body has to, in parallel so to speak, apply multiple of them to the foreign body. Some of those T-reg cells would never get activated (as they did not have the pattern to induce the -b- signal). But others, hopefully, would get the match, the -b- signal will happen, and the immune system will get activated.
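If it helps, here is a toy sketch of that 'RegEx' analogy in Python -- this is only the analogy, not real immunology, and the patterns and peptide strings are made up:

    import re

    # Toy sketch of the analogy only (not real immunology): each 'cell' carries
    # exactly one made-up pattern; activation needs signal -a- (a pathogen is
    # present) plus signal -b- (some cell's pattern matches the prepped peptides).
    CELLS = [re.compile(p) for p in ("GLY-ALA", "LPS-.*", "FLAGELLIN")]

    def immune_response(peptides, signal_a=True):
        # Apply every cell's single pattern 'in parallel'; most will never match.
        signal_b = any(cell.search(pep) for cell in CELLS for pep in peptides)
        return signal_a and signal_b  # both signals are needed for activation

    print(immune_response(["LPS-OUTER-MEMBRANE"]))  # True: one pattern matched
    print(immune_response(["SELF-PROTEIN"]))        # False: no match, no activation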
You can also check out shadow-cljs, which has great support for React Native development. I've been able to easily keep my project up to date with the latest React Native versions. Plus, with shadow-cljs you also get access to all of npm in a very straightforward way.
I'm seconding this. Not only is the React Native version that re-natal supports old, but the re-frame & reagent versions it supports are also pretty old. shadow-cljs is really a good solution and you can pick it up for React Native projects too.
PRGMR.com
1.25 GiB RAM, 15 GiB Disk for $5 per month.
https://prgmr.com/xen/
inexpensive.
no overage charges.
can pay by bitcoin.
(and FreeBSD friendly).
I wonder if you would also benefit from installing something like YunoHost [2] on top of Debian.
This will get you your personal cloud.
And then use YunoHost's 'Custom web app' container [4].
This way, you get an automatically configured web server (nginx), with certificates (via Let's Encrypt) that get auto-renewed.
You also get pre-configured user/SFTP access to your app's folders.
And those folders will be backed up when you hit the backup button...
Nginx will invoke your python app when the predefined URL is hit.
All you have to do is point your domain registrar to the prgmr VPS host running YunoHost.
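Just to make the 'nginx invokes your Python app' part concrete, here is a minimal sketch of the kind of app that could sit behind such a setup. The local port is an assumption, and how exactly the 'Custom web app' container wires nginx to it depends on its config:

    # Minimal sketch of a Python app that nginx could reverse-proxy to at a
    # predefined URL. The local port (8080) is an assumption; the actual wiring
    # depends on how the YunoHost 'Custom web app' container is configured.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"Hello from the app behind nginx\n"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # nginx would proxy, e.g., https://yourdomain/myapp/ to this local port.
        HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()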
Are you affiliated with the company? I chuckle when I see this hosting shop pop up on HN. They had advertisements on the dividers at the Safeway in Mountain View. Was wondering how they worked that.
To the best of my knowledge, lsc and I are the only people associated with prgmr.com who post semi-regularly on HN. lsc isn't involved much beyond having a minor stake in the company.
lsc set up the Safeway divider advertisements. It was pretty cheap, like $150 a month. I don't think we would do it again, but it certainly caught a lot of people's attention.
I am wondering if D, Nim and Zig would be able to just leverage the C++ version of the Cap'n Proto library directly?
(I think D has built-in C++ API support across compilers, not sure about the others)
Probably not. Remember that Cap'n Proto (like Protobuf) involves defining protocol schemas in an IDL and then using a code generator to generate classes with getters and setters and such in each language. Programs that use Cap'n Proto often use these generated APIs throughout their codebase. While you could perhaps take these generated classes and wrap them wholesale, there are two big problems with doing so:
1) You end up with APIs that are not idiomatic for the calling language. For instance, D supports properties, where C++ uses separate getters and setters. Also, FFI wrappers tend to add an additional layer of ugliness in order to translate features that don't exist in the calling language. If it were an API you only used in one small part of your code maybe this would be fine, but spread all over your codebase would be awful.
2) The generated getters and setters are designed to be inlined for best performance, but cross-language inlining is often not possible. In fact, most FFI wrappers incur a runtime performance penalty to convert between different conventions, and this penalty is going to be extra-severe when calling functions that are intended to be lightweight.
So this is why I say that the serialization layer -- which includes all this generated code that apps interact with directly -- should be native to the language.
But, you could use the native serialization layer to construct messages, and then pass it off to the C++ RPC implementation. The RPC implementation has a fairly narrow API surface with an extremely complex implementation behind it, so it's a perfect candidate for this.
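To make point (1) above concrete, here is a purely illustrative contrast in Python -- the classes are made up and are not the output of any real Cap'n Proto code generator:

    # Illustrative only: contrasting a thin FFI wrapper around C++-style
    # getters/setters with the idiomatic properties a native layer could offer.

    class PersonWrapped:
        """Roughly what a thin wrapper around generated C++ accessors looks like."""
        def __init__(self):
            self._name = ""
        def getName(self):           # the C++ getter leaks through the wrapper
            return self._name
        def setName(self, value):    # the C++ setter leaks through too
            self._name = value

    class PersonNative:
        """What a serialization layer native to the language can offer instead."""
        def __init__(self):
            self._name = ""
        @property
        def name(self):              # idiomatic property access
            return self._name
        @name.setter
        def name(self, value):
            self._name = value

    p, q = PersonWrapped(), PersonNative()
    p.setName("Ada")   # fine in one small spot, ugly when spread over a codebase
    q.name = "Ada"     # reads like ordinary code in the host language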
All the protobuf implementations I've worked with (especially protoc descendants) just feel like they wrapped the C implementation with some FFI and called it a day. They're all ugly and unidiomatic. So it's not exactly a high bar to meet.
For my needs, I'm ignoring the "ugly" bits. I'm looking for statically typed checks - e.g. to avoid spelling errors. Also discoverability - e.g. start typing the name of your service, press ".", and it gives you the options, then Alt+Space shows what you can provide - it's really easy with C# and Visual Studio.
I've heard of similar quality issues with other RPC libraries (either Thrift or Avro, I can't remember which). In my cross-language work, everything becomes very functional and non-idiomatic due to the overhead.
got it. thank you for the explanation.
I am planning to add a 'multiplayer' feature, where multiple participants need to quickly exchange positional and surrounding attributes.
Currently the system is Java with a JS front end. I feel that the JSON serialization I currently use is not the right thing..
But at the same time, I care about the 'size in KB' of the JS front end.
Therefore I have been evaluating the options.
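As a rough feel for the size difference, here is a small sketch (the field names and layout are made up) comparing a JSON-encoded positional update with a fixed binary layout via Python's struct module:

    import json
    import struct

    # Made-up positional update: an entity id plus x/y/z coordinates.
    update = {"id": 42, "x": 12.5, "y": -3.25, "z": 0.0}

    as_json = json.dumps(update).encode("utf-8")
    # '<Ifff' = little-endian: one unsigned 32-bit int followed by three 32-bit floats.
    as_binary = struct.pack("<Ifff", update["id"], update["x"], update["y"], update["z"])

    print(len(as_json))    # ~40 bytes of text, and it grows with float precision
    print(len(as_binary))  # 16 bytes, fixed size per update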
I initially tried Sovereign, but once I figured out that I had to pay for the Tarsnap backup service, and that it did not have Ansible for the nginx setup (I needed that experience for work stuff), I went with YunoHost.
So far I am happy with YunoHost and have set up a periodic donation to the project.
Overall, though, if you are working with Ansible at work, or want to advance in the DevOps field, learning Ansible and contributing to the Sovereign project would be a good path to take.
Yes, I have been doing the same thing, only with LMDB.
I do not think LMDB can load from an in-memory-only object (as it has to have a file to memory-map), however.
But for the same design reasons, I wanted something that:
a) I can move across host architectures
b) can act as a key-value cache as soon as the processes using it are restarted (so no cache-hydration delay)
c) I can diff/archive/restore/modify in place
We tested SQLite for the above purpose at the time, and on write speed and (b), LMDB was significantly faster.
So we lost the flexibility of SQLite, but I felt it was a reasonable tradeoff, given our needs.
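For anyone curious what that looks like in practice, here is a minimal sketch of the restart-surviving key-value cache pattern using the py-lmdb binding (the path, map size and keys are made up):

    import lmdb

    # Minimal sketch: LMDB memory-maps a file on disk, so the cache is already
    # warm the moment a restarted process reopens it (no re-hydration step).
    # The path, map_size and keys below are made up for illustration.
    env = lmdb.open("/var/cache/app.lmdb", map_size=1 << 30)  # up to 1 GiB

    with env.begin(write=True) as txn:       # write transaction
        txn.put(b"user:42", b"some cached value")

    with env.begin() as txn:                 # read-only transaction
        print(txn.get(b"user:42"))           # b'some cached value'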
I also know that one of Intel's Python toolkits for image recognition/AI uses LMDB (optionally) to store images, so that processing routines do not have to incur the cost of directory lookups when touching millions of small images.
(forgot the name of the toolkit though)…
Overall, this is a very valid practice/pattern in data processing pipelines, kudos to you for mentioning it.
Oracle has been doing the same thing for the last 25 years.
A company I worked for used the Oracle database for an enterprise system.
Over the years, Oracle acquired some of our competitors.
Then every time customers needed to renew their database licenses with Oracle, Oracle's sales reps would try to sell them the competing enterprise systems that Oracle had acquired.
It is completely free, open source, not difficult to use, and does not require any additional tools on the target host.
On the control (source) host you just need Ansible, and the target hosts need Python 3.
And from there you can use this role to push an nginx install to 10 or 100 target hosts (with the same single command). The hosts do not have to run the same OS (they can run a mix of Debian-based and Red Hat-based Linuxes, and FreeBSD too, I think (although I did not try that)).
This role automatically sets up and downloads the appropriate versions of the nginx binaries, and allows pretty flexible configuration of nginx itself, from load balancers to reverse proxies to plain-jane web servers...
Any passwords that may need to be transmitted to target hosts (e.g. for cert files) are normally encrypted in a cryptographically secure Ansible vault. So they get transferred over SSH to the hosts, without ever being stored in plain text on the source/controller host.
With regards to
> Most automation scripts are used to sell additional products like Ansible so this script avoids that.
I think most open source software, nginx included, can be thought of as 'promoting' something or somebody, just like any other endeavor that is meant to be publicly consumed.
[1] https://en.wikipedia.org/wiki/Regulatory_T_cell