The TOON format looks really promising for reducing token costs, especially for large datasets. I've been struggling with the same multi-LLM CLI issue and aichat seems like a solid solution. Curious if you've tested the token savings on real-world workloads yet?
I haven't tested it yet! My personal projects don't burn enough cost for me to be worried about it so far, but I'm still curious whether I'd see a difference even at lower volume, so I'll try soon!
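If anyone wants a quick low-volume check before wiring it into a real workload, a minimal sketch like this compares the same records serialized as JSON vs. a hand-written TOON-style block (assuming tiktoken is installed; cl100k_base is just one tokenizer, so actual counts vary by model):

```
# Minimal sketch: compare token counts for the same records as JSON
# vs. a hand-written TOON-style block. Assumes tiktoken is installed;
# cl100k_base is one example tokenizer, real counts vary by model.
import json
import tiktoken

records = [
    {"id": 1, "name": "Alice", "role": "admin"},
    {"id": 2, "name": "Bob", "role": "user"},
    {"id": 3, "name": "Charlie", "role": "admin"},
]

json_payload = json.dumps(records)
toon_payload = "users[3]{id,name,role}:\n1,Alice,admin\n2,Bob,user\n3,Charlie,admin"

enc = tiktoken.get_encoding("cl100k_base")
for label, text in [("json", json_payload), ("toon", toon_payload)]:
    print(f"{label}: {len(enc.encode(text))} tokens")
```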
TOON looks like an annotated CSV.
It's not clear to me why
```
users[3]{id,name,role}:
1,Alice,admin
2,Bob,user
3,Charlie,admin
```
should be better than
`users.csv`
```
id,name,role
1,Alice,admin
2,Bob,user
3,Charlie,admin
```
But maybe it's LLM magic
Multiple things!
As you can see, there's a row count next to "users", and a block like this can be nested or placed alongside other blocks.
That means for each block we get the following (a rough sketch follows the list):
- the name of the object
- the number of rows
- the field schema
- clear markers for where nested fields start and end
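To make the header structure concrete, here's a small Python sketch (my own illustration, not the official TOON tooling) that emits the name + row count + field schema header plus rows for a flat, uniform list; nesting and indentation are left out:

```
# Illustrative sketch only, not the official TOON library: build a
# TOON-style tabular block (name, row count, field schema, then rows)
# from a flat list of uniform dicts.
def to_toon_block(name, records):
    fields = list(records[0].keys())
    header = f"{name}[{len(records)}]{{{','.join(fields)}}}:"
    rows = [",".join(str(r[f]) for f in fields) for r in records]
    return "\n".join([header, *rows])

users = [
    {"id": 1, "name": "Alice", "role": "admin"},
    {"id": 2, "name": "Bob", "role": "user"},
    {"id": 3, "name": "Charlie", "role": "admin"},
]
print(to_toon_block("users", users))
# users[3]{id,name,role}:
# 1,Alice,admin
# 2,Bob,user
# 3,Charlie,admin
```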