[Imps] Test cases for parsing spec
James Graham
jg307 at cam.ac.uk
Thu Dec 7 10:37:20 PST 2006
Sam Ruby wrote:
> James Graham wrote:
>> FWIW, I've started writing tokenizer testcases that are simply
>> one-line eval-able python expressions of the form:
>> [input, expected output, description]
>> e.g. ["<h a='b'>", [["StartTag", "h", {'a':'b'}]], "Start Tag
>> w/attribute"]
>
> s/{'a':'b'}/{"a":"b"}/ and you have http://www.json.org/
OK, so based on this, I've moved the small number of existing tests we have to a
JSON-based format like so:
{"tests":
[
{"description":"Test description",
"input":"String to pass to tokenizer",
"output":[expected_output_tokens]}
]
}
Clearly this allows multiple tests per file simply by adding more objects to the
"tests" list. expected_output_tokens is a list of tokens, drawn from the
following set, in the order they are produced by the tokenizer:
["DOCTYPE", name, error?]
["StartTag", name, {attributes}])
["EndTag", name]
["Comment", data]
["Character", data]
"ParseError"
"AtheistParseError" <-- this one should perhaps be removed
If anyone has any tokenizer tests they would like to create or contribute, this
format would be ideal, for now at least.
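Since the files are plain JSON, a runner is trivial in any language. A rough
Python sketch (the tokenize argument here is just a stand-in for whatever entry
point your tokenizer exposes; it is assumed to return tokens in the list/string
forms listed above):

import json

def run_tests(filename, tokenize):
    # tokenize: takes the input string, returns the token list in the
    # ["StartTag", ...] / "ParseError" forms described above
    tests = json.load(open(filename))["tests"]
    for test in tests:
        ok = tokenize(test["input"]) == test["output"]
        print("%s: %s" % ("PASS" if ok else "FAIL", test["description"]))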
--
"Eternity's a terrible thought. I mean, where's it all going to end?"
-- Tom Stoppard, Rosencrantz and Guildenstern are Dead