
tokenizer

A grammar describes the syntax of a programming language, and might be defined in Backus-Naur form (BNF). A lexer performs lexical analysis, turning text into tokens. A parser takes tokens and builds a data structure like an abstract syntax tree (AST). The parser is concerned with context: does the sequence of tokens fit the grammar? A compiler combines a lexer and a parser (along with later stages such as semantic analysis and code generation) and is built for a specific grammar.
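
As a rough illustration of the lexer/parser split described above (a self-contained sketch, not tied to any repository listed here), here is a tiny Python example: a regular-expression lexer turns an arithmetic expression into tokens, and a recursive-descent parser checks the token sequence against a two-rule grammar and builds a nested-tuple AST.

import re

# Token specification for a tiny arithmetic grammar:
#   expr := NUMBER (("+" | "-") NUMBER)*
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("OP",     r"[+\-]"),
    ("SKIP",   r"\s+"),
]
TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

def lex(text):
    """Lexical analysis: turn raw text into (kind, value) tokens."""
    for match in TOKEN_RE.finditer(text):
        kind = match.lastgroup
        if kind != "SKIP":
            yield kind, match.group()

def parse(tokens):
    """Parse the token stream into a nested-tuple AST, checking the grammar."""
    tokens = list(tokens)
    pos = 0

    def expect(kind):
        nonlocal pos
        if pos >= len(tokens) or tokens[pos][0] != kind:
            raise SyntaxError(f"expected {kind} at token {pos}")
        value = tokens[pos][1]
        pos += 1
        return value

    node = ("num", int(expect("NUMBER")))
    while pos < len(tokens):
        op = expect("OP")
        right = ("num", int(expect("NUMBER")))
        node = (op, node, right)
    return node

print(parse(lex("1 + 2 - 3")))   # ('-', ('+', ('num', 1), ('num', 2)), ('num', 3))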

Here are 623 public repositories matching this topic...

AccessViolator commented Feb 24, 2020

Everything in diagrams.css should be scoped to a wrapping CSS class. As it stands, the file cannot be bundled with the rest of an app's CSS, because it contains global element selectors like these:

div {
    -webkit-touch-callout: none;
    -webkit-user-select: none;
    -khtml-user-select: none;
    -moz-user-select: none;
    -ms-user-select: none;
    user-select: none;
}

svg {
    width: 100%;
}


Ekphrasis is a text processing tool geared towards text from social networks such as Twitter or Facebook. Ekphrasis performs tokenization, word normalization, word segmentation (for splitting hashtags), and spell correction, using word statistics from two large corpora (English Wikipedia and a collection of 330 million English tweets).

  • Updated Feb 8, 2021
  • Python
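
The word segmentation step mentioned in the Ekphrasis description (splitting hashtags into their component words) can be illustrated with a small dynamic-programming search over unigram statistics. The sketch below is not Ekphrasis' API; the frequency table is a hypothetical stand-in for the Wikipedia/Twitter counts the library actually derives from its corpora.

from functools import lru_cache

# Hypothetical unigram counts standing in for real corpus statistics.
WORD_COUNTS = {"word": 500, "segmentation": 80, "words": 400, "egg": 30,
               "so": 900, "social": 300, "network": 250}
TOTAL = sum(WORD_COUNTS.values())

def score(word):
    """Relative unigram frequency; unseen words get a tiny constant penalty."""
    return WORD_COUNTS.get(word, 0.01) / TOTAL

@lru_cache(maxsize=None)
def segment(text):
    """Split `text` into the most probable sequence of words under `score`."""
    if not text:
        return ()
    candidates = []
    for i in range(1, len(text) + 1):
        head, tail = text[:i], text[i:]
        rest = segment(tail)
        prob = score(head)
        for w in rest:
            prob *= score(w)
        candidates.append((prob, (head,) + rest))
    return max(candidates)[1]

print(segment("wordsegmentation"))  # ('word', 'segmentation')
print(segment("socialnetwork"))     # ('social', 'network')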
SteelPhase commented Nov 26, 2019

Would it be possible to have the regex parser support character classes like \w within other character classes? I had a regex pattern earlier that used the character class [0-9a-zA-Z_\.-], and I attempted to simplify it to [\w\.\-]. I didn't notice that this library doesn't support that, and was wondering how difficult it would be to implement. For the time being I'm just expanding the shorthand by hand.
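
For context, the manual workaround mentioned at the end amounts to replacing the \w shorthand with its explicit ASCII ranges inside the class. Python's re module does allow shorthand classes inside character classes, so it can be used to check that the two spellings accept the same ASCII characters; this is only a demonstration of the equivalence, not the library the issue is about.

import re

# Shorthand form and its hand-expanded equivalent, as described in the comment.
shorthand = re.compile(r"[\w.\-]+", re.ASCII)   # re.ASCII keeps \w at [0-9a-zA-Z_]
expanded  = re.compile(r"[0-9a-zA-Z_.\-]+")

# Both patterns should classify every ASCII character identically.
for code in range(128):
    ch = chr(code)
    assert bool(shorthand.fullmatch(ch)) == bool(expanded.fullmatch(ch)), ch

print(shorthand.fullmatch("some_token-1.2").group())  # some_token-1.2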