Another great example is how Claude has been helping Mozilla find zero-day exploits in Firefox, by the hundreds, ranging from minor to CVE-level, for over a year:
I think the Mozilla example is a good one because it's a large codebase. Lots of people keep asking "how does it do with a large codebase?" Well, there you go.
The problem is that you get all kinds of "security spam", in the same way that social media is flooded with automated but on-topic content. That doesn't mean that a few of the reports aren't correct.
One way to filter that out could be to require a PoC for the exploit and test it in a sandbox. I think what XBOW and others are doing is real.
What is great about the Ohm approach compared to typical lex/yacc/ANTLR parsers is that it avoids ambiguity by using ordered choice (the first matching rule wins), instead of requiring you to resolve conflicts explicitly. This makes working with Ohm/PEGs less painful in the initial phase of a project.
It's also important to highlight that this makes the parsing process slower.
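To make the ordered-choice behaviour concrete, here's a toy ohm-js sketch (my own illustrative grammar, not something from the article). Because alternatives are tried in order and the first match wins, "<=" has to be listed before "<":

    import * as ohm from 'ohm-js';

    // Toy grammar: PEG alternatives are tried top to bottom, so "<=" must
    // come before "<". With the order reversed, relop would consume "<",
    // the following ident would fail on "=", and the whole match would fail.
    const g = ohm.grammar(`
      Cmp {
        Exp   = ident relop ident
        relop = "<=" | "<"
        ident = letter+
      }
    `);

    console.log(g.match('a <= b').succeeded()); // true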
> it avoids ambiguity by using ordered choice (the first matching rule wins)
PEG parsing tool authors often say that ordered choice solves the problem of ambiguity, but that's very misleading.
Yes, ordered choice is occasionally useful as a way to resolve grammatical overlap. But as a grammar author, it's more common for me to want to express unordered choice between two sub-grammars. A tool that supports unordered choice will then let you know when you have an unexpected ambiguity.
PEG-based tools force you to use ordered choice for everything. You may be surprised later to find out that your grammar was actually ambiguous, and the ambiguity was "resolved" somewhat arbitrarily by picking the first sub-grammar.
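As a made-up illustration of that failure mode (my own toy grammar, again in ohm-js): the classic dangling-else grammar is ambiguous as a CFG, but a PEG never tells you so; it just commits to the first alternative that matches:

    import * as ohm from 'ohm-js';

    // Ambiguous as a CFG (dangling else), but a PEG resolves it silently.
    const g = ohm.grammar(`
      Toy {
        Stmt = "if" ident "then" Stmt "else" Stmt  -- ifElse
             | "if" ident "then" Stmt              -- ifOnly
             | ident                               -- simple
        ident = letter+
      }
    `);

    // This parses, with the "else" quietly bound to the inner "if".
    // An Earley/GLR-style tool would instead report that the grammar
    // is ambiguous and make you decide.
    console.log(g.match('if a then if b then x else y').succeeded()); // true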
> This makes working with Ohm/PEGs less painful in the initial phase of a project.
I do agree with this. But then what happens in the later phases? Do you switch to a tool that supports unordered choice to see if you have any ambiguities? And potentially have to change your grammar to fix them?
Now I don't know what to think. The author's got a ton more experience than me. It seems there's a big enough market out there for people wanting non-ambiguity proofs and linear running-time proofs.
Then again, the more I think about parsing, the more I think it's a completely made-up problem. I'm pretty sure there's a middle ground between Lisp (or even worse, Forth) and Python. Fancy parsing has robbed us of the possibilities of metaprogramming and instead opened up a market for Stephen Wolfram, whose product features a homoiconic language.
I've been gorging on Formal Language Theory literature recently. I am now fully convinced that Regular Languages are a very good idea: They are precisely the string-search problems that can be solved in constant space. If you had to find occurrences of certain patterns in a massive piece of text, you would naturally try to keep the memory usage of your search program independent of the text's size. But the theory of Regular Languages only actually gets off the ground when you wonder if Regular Languages are closed under concatenation or not. It turns out that they are closed under concatenation, but this requires representing them as Non-Deterministic Finite-State Automata - which leads to the Kleene Star operation and then Regular Expressions. This is a non-obvious piece of theory which solves a well-formulated problem. So now I suspect that if history were to repeat again, Regular Expressions would still have been invented and used for the same reasons.
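(As a toy sketch of what I mean by constant space - my own illustration, nothing deep - a hand-rolled two-state matcher that looks for occurrences of "ab*c" keeps a single integer of state, however long the text is.)

    // Does the text contain a match for a b* c? The state is one integer,
    // independent of the length of the input -- the constant-space property
    // that motivates regular languages in the first place.
    function containsABstarC(text: string): boolean {
      let state = 0; // 0 = seen nothing useful, 1 = seen "a" then zero or more "b"
      for (const ch of text) {
        if (state === 0) {
          if (ch === 'a') state = 1;
        } else {
          if (ch === 'c') return true;                 // "a" b* "c" completed
          if (ch !== 'b') state = ch === 'a' ? 1 : 0;  // restart the search
        }
      }
      return false;
    }

    console.log(containsABstarC('xxabbbcyy')); // true
    console.log(containsABstarC('xxabbd'));    // false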
By contrast, I find Context-Free Grammars much more dubious, and LR almost offensive? The problem with LR is I can't find a description of what it is that isn't just a gory description of how it works. And furthermore, it doesn't appear to relate to anything other than parsing. There's no description anywhere of how any of its ideas could be used in any other area.
The issue with Regex for parsing is that it can't handle balanced parentheses: https://en.wikipedia.org/wiki/Regular_expression. More generally, it can't handle nested structure. Context Free Grammars are the most natural extension that can: they add substitution (rules that can refer to other rules, including themselves) to Regex, which makes them powerful enough to recognize nested structure. So Regex would be reinvented if history were rerun, but so would Context Free Grammars. Part of the complexity in parsing is attaching semantic meaning to the parse; Regex mostly avoids this by not caring how a string matches, just whether it matches or not.
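For example (a toy sketch of my own, not from the linked page), the balanced-parentheses language needs a grammar rule that refers to itself, and a recursive recognizer mirrors that self-reference directly:

    // Recognizer for Expr -> ( "(" Expr ")" )* ; returns the index just
    // past the balanced prefix starting at i, or null on a mismatch.
    function parseBalanced(s: string, i = 0): number | null {
      while (i < s.length && s[i] === '(') {
        const inner = parseBalanced(s, i + 1);      // the nested Expr
        if (inner === null || s[inner] !== ')') return null;
        i = inner + 1;                              // continue after ")"
      }
      return i;
    }

    const isBalanced = (s: string) => parseBalanced(s) === s.length;
    console.log(isBalanced('(()())')); // true
    console.log(isBalanced('(()'));    // false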
Now, I do agree that LR grammars are messy. Nowadays, they have mostly fallen from favor. Instead, people use simpler parsers that work for the restricted grammars actual programming languages have.
IIRC there is some research into formalizing the type of unambiguous grammar that always uses () or [] as nesting elements, but can use Regex for lexing.
I understand what a CFG is and why the Dyck language (matching parens) is not a regular language. My point was that CFG/CFL is less motivated by a reasonable and uniquely characterising constraint - such as making memory usage independent of the size of an input string - than regex is.
Then again, you are right that CFGs are very natural. And they do admit a few easy O(n^3) parsing algorithms, like Earley and CYK.
I think your last sentence relates to Visibly Pushdown Grammars. See also Operator Precedence Grammars.
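To back up the "easy" bit about CYK (a toy sketch of my own, not from the thread): the whole algorithm is a triple loop over a table, here recognizing balanced parens with a small CNF grammar (input assumed to contain only parens):

    // CNF grammar: S -> L R | L T | S S ;  T -> S R ;  L -> "(" ;  R -> ")"
    const binary: [string, string, string][] = [
      ['S', 'L', 'R'], ['S', 'L', 'T'], ['S', 'S', 'S'], ['T', 'S', 'R'],
    ];
    const terminal: Record<string, string> = { '(': 'L', ')': 'R' };

    function cykAccepts(input: string, start = 'S'): boolean {
      const n = input.length;
      if (n === 0) return false;
      // table[i][len] = nonterminals deriving input.slice(i, i + len)
      const table = Array.from({ length: n }, () =>
        Array.from({ length: n + 1 }, () => new Set<string>()));
      for (let i = 0; i < n; i++) table[i][1].add(terminal[input[i]]);
      for (let len = 2; len <= n; len++)          // three nested loops: O(n^3)
        for (let i = 0; i + len <= n; i++)
          for (let split = 1; split < len; split++)
            for (const [a, b, c] of binary)
              if (table[i][split].has(b) && table[i + split][len - split].has(c))
                table[i][len].add(a);
      return table[0][n].has(start);
    }

    console.log(cykAccepts('(())()')); // true
    console.log(cykAccepts('(()'));    // false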
I remember that Matteo Realdo Colombo (1515-1559) [1] described the clitoris. There is a novel about the story [2], which was a finalist for one of the top Spanish literary prizes [3].
>Many of the contributions made in De Re Anatomica overlapped the discoveries of another anatomist, Gabriele Falloppio, most notably in that both Colombo and Falloppio claimed to have discovered the clitoris.
LinkedIn already employs anti-scraping measures, so I'd expect a lot of users to get flagged.
That's not unique to LinkedIn, but what is somewhat unique is the strong linkage to real-world identities, which raises the cost of Sybil attacks on high-trust personal networks.
I checked out the BB revenue report [1], "driven by upbeat strength across its QNX division" (quote). I think this will make it more difficult to obtain a QNX license if they don't see it as strategic?
BB is a very traditional company; they're very much stuck in the 90s when it comes to software licensing. There is a BB QNX team member here on HN who can probably shine much more light on this than I can (but I realize they may not be free to talk about it).