It seems nearly nothing can be taken for granted ;).
It's best to just run against a big set of APIs and fix issues as they
arise though.
More flexibility means more maintenance, after all.
... it shows that the override I used previously won't work for `admin`.
Therefore we have to keep the actual value, instead of degenerating it.
Makes sense ... it's interesting how much one tends to hard-code things
to work for just a few cases, unless you opt in to see the whole picture.
This file is completely generated, and allows us to easily bring in
new versions after each json update.
To make that work, we simply merge all data handed to mako-render,
inside of it. That way, we can put 'api/list' data in any yaml.
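The merge could look roughly like the sketch below; file names and
helper names are made up, it's just meant to show the idea of folding
several yaml files into one render context:

```python
# Minimal sketch (names are hypothetical): fold several yaml data files
# into a single context dict before handing it to mako-render.
import yaml

def merge_dicts(base, other):
    """Recursively merge 'other' into 'base'; values in 'other' win."""
    for key, value in other.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            merge_dicts(base[key], value)
        else:
            base[key] = value
    return base

def load_context(*yaml_paths):
    context = {}
    for path in yaml_paths:
        with open(path) as fp:
            merge_dicts(context, yaml.safe_load(fp) or {})
    return context

# e.g. context = load_context('api/list.yaml', 'youtube-v3.yaml')
#      mako.template.Template(filename='lib.mako').render(**context)
```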
However, this also means we need recursive builders, which is totally
unsupported for now ...
This means we have to generalize rbuild generation ... could be easy.
Let's see.
That way, we make retrieved tokens independent of the order scopes
were passed in. Additionally, we can pass any scopes, just in case
someone uses one token for multiple APIs.
Let's keep it flexible.
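The idea, sketched in Python for brevity; the hashing detail is just an
assumption, only the order-independence matters:

```python
# Sketch: sort and de-duplicate scopes so the token cache key does not
# depend on the order in which scopes were passed in.
import hashlib

def scope_key(scopes):
    normalized = sorted(set(scopes))
    return hashlib.sha1(','.join(normalized).encode('utf-8')).hexdigest()

# Both calls yield the same key, so the same cached token is found:
assert scope_key(['https://www.googleapis.com/auth/youtube',
                  'https://www.googleapis.com/auth/youtube.readonly']) == \
       scope_key(['https://www.googleapis.com/auth/youtube.readonly',
                  'https://www.googleapis.com/auth/youtube'])
```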
Just to have another, different set of API information to deal with,
and to not accidentally hard-code things to work with YouTube only.
Prepared dealing with media uploads, and it turns out to be best to
adjust 'doit()' to take the respective type parameter.
We also have to think about downloads, like the ones for Google Drive,
which require custom query parameters.
This includes descriptions, of course, and generally seems to look
quite neat. For now, we brutally consume all input to own it,
but in the future we might be able to bring in `Borrow` to support them all.
Everything we have, feature-wise, is now documented in a first version
at least.
We shall keep this up to date with what we are implementing, which also
helps with figuring out a good api.
That way, we have a common library to pull in from the main repository,
and a space for testing new code (in a partial implementation).
Next there will be generated object structures.
That way, the makefile doesn't need to know that much anymore, and
gets simpler/less verbose.
\# Also
* Added filters for Rust doc strings (see the sketch below)
* Fixed .PHONY
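A sketch of what such a doc-string filter might look like; the actual
filter may differ, and the names here are made up:

```python
# Hypothetical filter: turn a description into Rust doc-comment lines.
def rust_doc_comment(text, indent=''):
    lines = (text or '').splitlines() or ['']
    return '\n'.join('%s/// %s' % (indent, line.rstrip()) for line in lines)

# In a template it could be applied as an expression filter, e.g.
# ${method.description | rust_doc_comment}
```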
Namespaces can only be used during rendering, which is fine if
you remind yourself of the newline rules.
However, I also need some utilities that convert input data. These
are now within their own libraries, which can be used from python blocks
like the ordinary python functions they are.
Quite neat.
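A minimal, self-contained sketch of that calling convention; in the
real setup the helpers live in their own namespace files, and all names
here are made up:

```python
from mako.template import Template

TEMPLATE = """\
<%!
    # module-level python: helpers defined here, or imported from a library
    def canonical(name):
        return name.strip().lower().replace(' ', '_')
%>\
<%
    # ordinary python block: the helper is called like any python function
    module_name = canonical(api['name'])
%>\
pub mod ${module_name};
"""

print(Template(TEMPLATE).render(api={'name': 'YouTube Data'}))
# -> pub mod youtube_data;
```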
In the future, most of the functionality will be in separate namespaces;
the top level will just assemble the main library file, using the
provided %defs. That way, the main file is kept clean.
It's quite final, and super easy to change and to read.
It seems we want to use namespaces/shared implementations soon to allow
using defs. In our case, we transform the version in a particular way,
which is easy enough, yet I'd like to use it to make the system more
powerful.
That way, we read the data files only once, but produce all the outputs
we need. Together with a powerful makefile, we get a single multi-output
invocation with proper dependency tracking.
Everything will be regenerated, though, even if just a single input
template file changed.
The alternative would be to have one dependency and one invocation per
input, but that would read the entire json each time.
Let's see what's faster/more useful during development.
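A rough sketch of the single-invocation idea; paths and names are
made up:

```python
# Read the (potentially large) json input once, then render every
# requested template/output pair from that single in-memory context.
import json
from mako.template import Template

def render_all(data_path, jobs):
    """jobs: a list of (template_path, output_path) pairs."""
    with open(data_path) as fp:
        context = json.load(fp)  # read the heavy input only once
    for template_path, output_path in jobs:
        rendered = Template(filename=template_path).render(**context)
        with open(output_path, 'w') as fp:
            fp.write(rendered)

# e.g. render_all('youtube-v3.json',
#                 [('lib.mako', 'lib.rs'), ('cargo.mako', 'Cargo.toml')])
```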
Now we can write mako templates, with a feature set similar to
pyratemp's, except that the syntax is nicer, it allows doing everything,
and there is syntax highlighting support.
Let's see how it fares.
After minor modifications to pyratemp, it certainly does the job.
What it **does NOT** do:
* multiple outputs per template/command invocation
* NICE embedding of code (like GSL can)
It will do the job nonetheless, but mako might be worth a look.
As GSL failed in my first attempt to get the example program going,
it might be better to try something else before too much time is spent.
Fortunately, pyratemp **seems** to be something usable, and even if not,
it might be possible to make it usable as it's just a 'simple'
python script that I might be able to understand, if need be.
Added all prerequisite programs in binary form for easier use.
Make is now implemented at the top level, and is not expected to do too
much work actually. It will, however, keep track of all required
gsl invocations and make sure calls are efficient by not having
to rebuild everything every time. That's what make does, anyway ;)
That way, it will remain clearly documented how to do this, and allow
for efficient calling of gsl as well, at some point.
Of course it will be a little more difficult for us to know all
dependencies, but gsl could generate these as well for us, I suppose.