Decamel and the Colossal Cave
One thing I've noticed, when working in languages sufficiently introspective that you can access names of functions, is that any large enough system ends up implementing a decamel method.
Basically it takes a string, being a method or class name, which (due to limitations of language design rooted in the days when the only way you'd get timely parsing on the available hardware was to use recursive descent) is MadeOfSeveralWordsUsingCapitalsToIndicateBreaks, and breaks it apart for user presentation.
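As a concrete illustration, here's a minimal sketch of such a decamel helper in Java. The class and method names are my own invention rather than any particular framework's, and the regex is one plausible way to do the splitting:

    import java.util.regex.Pattern;

    // A minimal decamel sketch: split a CamelCase identifier into words for display.
    public final class Decamel {

        // Insert a break where a lower-case letter or digit is followed by an
        // upper-case letter, or where a run of capitals ends
        // (so "XMLTestReport" -> "XML Test Report").
        private static final Pattern BOUNDARY =
                Pattern.compile("(?<=[a-z0-9])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])");

        public static String decamel(String identifier) {
            return BOUNDARY.matcher(identifier).replaceAll(" ");
        }

        public static void main(String[] args) {
            System.out.println(decamel("DogWalksWhenYouKickIt")); // Dog Walks When You Kick It
            System.out.println(decamel("XMLTestReport"));         // XML Test Report
        }
    }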
Normally it's the text of a menu that's late bound to a function call, or the default label of an entity type on the palette of a modelling tool. I've also used it with a unit test framework that output test names and results to an XML file and, with the magic of Ant, Saxon and FOP, presented a properly formatted PDF automated test report to my line manager.
There's an example on code_poet of an OSS tool using it to document unit tests (though his 'self describing system' label is more than a little off the mark: labelling a unit test 'Dog walks when you kick it' doesn't guarantee that the test has anything to do with the label; and, as the mismatch in tenses shows - both his labels are in the simple present, yet the test for barking is in the past perfect and the test for walking in the present progressive - even giving a long, meaningful label doesn't guarantee to uncover the underlying assumptions that make creating good unit tests tricky).
Now, that stuff is very easy, and a very powerful tool for creating agile applications (you don't want a thousand anonymous ActionListeners cluttering up your code - even if they're generated by your IDE, it's all hardwired and a pain to maintain), and maybe a help for readable unit tests.
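To make the late-binding point concrete, here's a hypothetical sketch (building on the Decamel helper above, with names of my own choosing) of how decamelled method names can become menu labels wired to reflective calls, instead of a pile of hand-written listeners:

    import java.lang.reflect.Method;
    import javax.swing.JMenu;
    import javax.swing.JMenuItem;

    // Hypothetical sketch: build a Swing menu whose item labels are the decamelled
    // names of the target object's public no-argument methods, each item invoking
    // its method reflectively when selected.
    public final class ReflectiveMenu {

        public static JMenu menuFor(Object target) {
            JMenu menu = new JMenu(Decamel.decamel(target.getClass().getSimpleName()));
            for (Method method : target.getClass().getMethods()) {
                if (method.getParameterCount() == 0
                        && method.getDeclaringClass() != Object.class) {
                    JMenuItem item = new JMenuItem(Decamel.decamel(method.getName()));
                    item.addActionListener(event -> {
                        try {
                            method.invoke(target);
                        } catch (ReflectiveOperationException e) {
                            throw new RuntimeException(e);
                        }
                    });
                    menu.add(item);
                }
            }
            return menu;
        }
    }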
Something else on my radar, this time via decafbad but also now on LtU, is Inform7, which is creating a bit of a buzz.
Now I've known about controlled English systems such as CLCE and ACE, both of which are general-purpose mappings of a subset of natural language onto first-order logic, and, given effort, you can use them to create a model that can be transformed into running code. But it seems that the generality of these AI-originated systems gets in the way - or maybe I'm just too lazy to dig into them far enough to get a return on my effort.
It's also true that both CLCE and ACE have been used to create running code for systems specified in natural language, but in my experience there are always problems with general-purpose code generators (not that I'm saying anything about the quality of those projects in particular).
So instead of applying the full systems to the general case, what happens if you add a simple adventure-game-style parser to the decamelled code? Can you generate unit tests that way, or do you need the full linguistic systems to get the assumptions about tense right?
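For the adventure-game half of that question, here's a hypothetical sketch (again reusing the Decamel helper above; all names are mine): decamelled CamelCase names become the parser's vocabulary, and a crude matcher maps typed commands onto them - with no handling of tense at all, which is exactly the open question.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch: an adventure-game-style dispatcher whose vocabulary is
    // built from decamelled CamelCase names, e.g. "KickDog" -> "kick dog".
    public final class TinyParser {

        private final Map<String, Runnable> commands = new HashMap<>();

        // Register an action under the phrase produced by decamelling its name.
        public void register(String camelName, Runnable action) {
            commands.put(Decamel.decamel(camelName).toLowerCase(), action);
        }

        // Very naive matching: strip punctuation, lower-case, and look up the phrase.
        // Articles, synonyms and (crucially) tense are left as an exercise.
        public boolean handle(String input) {
            String phrase = input.toLowerCase().replaceAll("[^a-z ]", "").trim();
            Runnable action = commands.get(phrase);
            if (action == null) {
                return false;
            }
            action.run();
            return true;
        }

        public static void main(String[] args) {
            TinyParser parser = new TinyParser();
            parser.register("KickDog", () -> System.out.println("The dog walks away."));
            System.out.println(parser.handle("Kick dog"));       // true
            System.out.println(parser.handle("The dog walked")); // false - tense and word order defeat it
        }
    }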
TME
Labels: programming