Creating Custom Tokenization Models with Sonnet Tokenization Engine

Sonnet Tokenization Engine 1.1.0 includes the ability to train custom token models from your text. Training on your own text typically improves tokenization accuracy because the model more closely matches the text you need to tokenize. This post describes how to launch an instance of Sonnet Tokenization Engine on AWS, connect to it, train a custom token model, and then use it.

To get started, let’s launch an instance of Sonnet Tokenization Engine from the AWS Marketplace. On the product page, click the orange “Continue to Subscribe” button.


On the next page, we highly recommend selecting a VPC from the VPC Settings options. This allows you to launch Sonnet Tokenization Engine on newer instance types. Select your VPC and a public subnet.

Now, select an instance type. We recommend a t2.micro for this demonstration. In production you will likely want a larger instance type.

Now click the “Launch with 1-Click” button!

An instance of Sonnet Tokenization Engine will now be starting in your AWS account. Head over to your EC2 console to check it out. By default, for security, port 22 (SSH) is not open on the instance. Let’s open it so we can connect: click the instance’s security group, click Inbound Rules, and add a rule for port 22. Now let’s SSH into the instance.
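If you prefer the command line, the same inbound rule can be added with the AWS CLI. This is only a sketch: the security group ID below is a placeholder, so substitute the ID of your instance’s actual security group.

```shell
# Open SSH (TCP port 22) on the instance's security group.
# sg-0123456789abcdef0 is a placeholder; use your own group ID.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 22 \
    --cidr 0.0.0.0/0
```

For anything beyond a short demo, consider restricting the `--cidr` value to your own IP range rather than 0.0.0.0/0.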

ssh -i keypair.pem

Sonnet Tokenization Engine is installed under /opt/sonnet.

cd /opt/sonnet

Training a custom token model requires training data. The format for this data is a single sentence per line with tokens separated by whitespace or <SPLIT>. You can download sample training data for this exercise.
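To make the format concrete, here are two made-up sentences written in that style (these are illustrative examples, not lines from the actual sample file). Each line is one sentence, and token boundaries not already marked by whitespace are marked with <SPLIT>:

```shell
# Write two hypothetical training lines; token boundaries that are not
# whitespace (e.g. before punctuation) are marked with <SPLIT>.
cat <<'EOF' > /tmp/sample.train
The quick brown fox jumped over the fence<SPLIT>.
He said he would arrive on Tuesday<SPLIT>.
EOF

# One sentence per line, so this file contains 2 training events.
wc -l < /tmp/sample.train
```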

wget -O /tmp/token.train

We also need a training definition file. Again, we can download one for this exercise:

wget -O /tmp/token-training-definition.xml

Using these two files we are now ready to train our model.

sudo su sonnet
./bin/ /tmp/token-training-definition.xml

The output will look similar to:

Sonnet Token Model Generator
Version: 1.1.0
Beginning training using definition file: /tmp/token-training-definition.xml
2018-03-17 12:47:46,135 DEBUG [main] models.ModelOperationsUtils ( - Using OpenNLP data format.
2018-03-17 12:47:46,260 INFO  [main] training.TokenModelOperations ( - Beginning tokenizer model training. Output model will be: /tmp/token.bin
Indexing events with TwoPass using cutoff of 0

	Computing event counts...  done. 6002 events
	Indexing...  done.
Collecting events... Done indexing in 0.54 s.
Incorporating indexed data for training...  
	Number of Event Tokens: 6002
	    Number of Outcomes: 2
	  Number of Predicates: 6290
Computing model parameters...
Performing 100 iterations.
  1:  . (5991/6002) 0.9981672775741419
  2:  . (5995/6002) 0.9988337220926358
  3:  . (5996/6002) 0.9990003332222592
  4:  . (5997/6002) 0.9991669443518827
  5:  . (5996/6002) 0.9990003332222592
  6:  . (5998/6002) 0.9993335554815062
  7:  . (5998/6002) 0.9993335554815062
  8:  . (6000/6002) 0.9996667777407531
  9:  . (6000/6002) 0.9996667777407531
 10:  . (6000/6002) 0.9996667777407531
Stopping: change in training set accuracy less than 1.0E-5
Stats: (6002/6002) 1.0
Compressed 6290 parameters to 159
1 outcome patterns
Entity model generated complete. Summary:
Model file   : /tmp/token.bin
Manifest file : token.bin.manifest
Time Taken    : 2690 ms
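Each numbered line in the log reports training-set accuracy for that iteration as correct events divided by total events. For instance, the figure printed at iteration 1 can be reproduced with a quick one-liner:

```shell
# Iteration 1 classified 5991 of the 6002 training events correctly.
awk 'BEGIN { printf "%.6f\n", 5991/6002 }'
# → 0.998167
```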

Training produces the model file and its associated manifest file. Copy the manifest file to Sonnet’s models directory.

cp /tmp/token.bin.manifest /opt/sonnet/models/

Now start/restart Sonnet.

sudo service sonnet restart

The model will be loaded and ready for use. All API requests for tokenization that are received for the model’s language will be processed by the model. To try it:

curl "" -d "Tokenize this text please." -H "Content-Type: text/plain"
