That way, it's easy to obtain the respective precompiled binary, as
well as see the source code.
Overall, it makes promoting the tools easier, as the CLI docs can be
linked to directly.
Closes#108
[skip ci]
All assets are configured via shared.yaml and are located elsewhere on
the web. This could lead to broken assets at some point, but I am just
risking it for now, knowing that it would be easy to switch to local
resources.
Closes#106
[skip ci]
Previously we would define information about the program types
in two places: once for the index, and once per program type.
Now, within index.html, we just load the respective program-type
information, so we always have access to the latest data.
Closes#104
[skip ci]
That way, we can provide better service: CLIs that consume a lot of
quota can easily have their own app credentials, and with them, their
own quota.
The fallback will be a project that allows use of all available
Google APIs.
Users can always put in their own application secret to use their own
quota, or even paid services.
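Roughly, the intended lookup order, as a sketch (`ApplicationSecret`,
`load_user_secret` and the default values are made up for illustration,
not the actual API):

```rust
// Hypothetical sketch: prefer a user-provided application secret,
// fall back to the shared default project otherwise.
struct ApplicationSecret {
    client_id: String,
    client_secret: String,
}

// Illustrative only: reads a secret the user placed at a well-known path.
fn load_user_secret() -> Option<ApplicationSecret> {
    // e.g. parse a per-API secret json file, if present
    None
}

fn application_secret() -> ApplicationSecret {
    load_user_secret().unwrap_or_else(|| ApplicationSecret {
        // Fallback: credentials of a shared project with all APIs enabled.
        client_id: "<default-client-id>".into(),
        client_secret: "<default-client-secret>".into(),
    })
}

fn main() {
    let secret = application_secret();
    println!("using client id: {}", secret.client_id);
    let _ = secret.client_secret;
}
```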
Fixes#80
We parse required scalar values and handle parse errors correctly,
to the point where we make a simple, non-upload doit() call.
It shows that we seem to build invalid calls for now, but that's nothing
we can't fix once the time is ripe.
Next goals will be related to finalizing the argument parsing code.
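For illustration, a sketch of how a required scalar might be parsed
ahead of the doit() call (all names are hypothetical):

```rust
// Hypothetical sketch: parse a required scalar CLI value, collecting
// errors instead of panicking, before attempting a non-upload doit() call.
fn parse_required_u32(name: &str, value: Option<&str>) -> Result<u32, String> {
    match value {
        None => Err(format!("argument '{}' is required", name)),
        Some(v) => v
            .parse::<u32>()
            .map_err(|e| format!("argument '{}': invalid value '{}' ({})", name, v, e)),
    }
}

fn main() {
    match parse_required_u32("max-results", Some("50")) {
        Ok(n) => println!("calling doit() with max-results = {}", n),
        Err(msg) => eprintln!("usage error: {}", msg),
    }
}
```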
Fixes#60
It can be selected for each type of program we want to build, and makes
sense for everything that is not a library.
We also tried to unify names and folders a bit more, even though there
is certainly more work to be done before everything is fully non-redundant.
Fixes#43
This allows us to build efficiently. CLI programs can now have their
own cmn.rs implementation, which we can test standalone with
`cargo test`.
The primary makefile currently pulls in each type-*.yaml explicitly;
one day we could put this into a loop.
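As a sketch, standalone testing of such a cmn.rs could look like this
(`is_flag` is a made-up helper, not the real cmn.rs contents):

```rust
// cmn.rs (sketch): shared helpers used by the generated CLI programs.
pub fn is_flag(arg: &str) -> bool {
    arg.starts_with("--")
}

#[cfg(test)]
mod tests {
    use super::*;

    // Runs standalone via `cargo test` within the CLI's own crate.
    #[test]
    fn flags_are_recognized() {
        assert!(is_flag("--upload-media"));
        assert!(!is_flag("upload-media"));
    }
}
```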
Fixes#11
This is the first of many changes to come.
We try to leverage our ability to merge multiple data sources into one
to abstract away what we are actually doing, and of course, to allow
sharing the majority of the code, where applicable.
Previously, it would query the size from the wrong dict and obtain
the value 0 all the time. This would have made every upload fail with
`UploadSizeLimitExceeded`.
Now we obtain the actual size limit, and ignore it if it is unset/0
for some reason.
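The corrected check, roughly (a sketch; the struct and field names are
assumptions, only the error name follows this commit):

```rust
// Sketch of the corrected check: read the limit from the right place,
// and treat an unset/0 limit as "no limit" rather than rejecting everything.
struct MediaParams {
    max_size: u64, // 0 means the API declared no explicit limit
}

enum Error {
    UploadSizeLimitExceeded { size: u64, max_size: u64 },
}

fn check_upload_size(params: &MediaParams, size: u64) -> Result<(), Error> {
    if params.max_size > 0 && size > params.max_size {
        return Err(Error::UploadSizeLimitExceeded {
            size,
            max_size: params.max_size,
        });
    }
    Ok(())
}

fn main() {
    let params = MediaParams { max_size: 0 }; // API declared no limit
    assert!(check_upload_size(&params, 1 << 30).is_ok());
}
```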
Patch += 1
This caused cargo on a case-sensitive file-system to not find the
cargo file, which made it look upwards in the directory structure
to find the correctly named Cargo.toml of the 'cmn' development
project.
It seems nearly nothing can be taken for granted ;).
It's best to just run against a big set of APIs and fix issues as they
arise, though.
More flexibility means more maintenance, after all.
... it shows that the override I used previously won't work for `admin`.
Therefore we have to keep the actual value, instead of degenerating it.
Makes sense ... it's interesting how much one tends to hard-code things
to work for just a few cases, unless you opt in to see the whole picture.
This file is completely generated, and allows us to easily bring in
new versions after each json update.
To make that work, we simply merge all data handed to mako-render
inside of it. That way, we can put 'api/list' data in any yaml.
That way, we make retrieved tokens independent of the order in which
scopes were passed. Additionally, we can pass any scopes, just in case
someone uses one token for multiple APIs.
Let's keep it flexible.
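A minimal sketch of the idea: reduce the requested scopes to a sorted,
deduplicated key, so the same set always maps to the same token:

```rust
use std::collections::BTreeSet;

// Sketch: derive a canonical cache key from the requested scopes, so that
// the same set of scopes yields the same token regardless of argument order.
fn scope_key(scopes: &[&str]) -> String {
    let unique: BTreeSet<&str> = scopes.iter().copied().collect();
    unique.into_iter().collect::<Vec<_>>().join(" ")
}

fn main() {
    let a = scope_key(&["https://www.googleapis.com/auth/youtube",
                        "https://www.googleapis.com/auth/drive"]);
    let b = scope_key(&["https://www.googleapis.com/auth/drive",
                        "https://www.googleapis.com/auth/youtube"]);
    assert_eq!(a, b); // order no longer matters
}
```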
Just to have another, different set of API information to deal with,
and to not accidentally hard-code things to work with YouTube only.
Prepared dealing with media uploads; it turns out to be best to
adjust 'doit()' to take the respective type parameter.
We also have to think about downloads, like the ones for Google Drive,
which require custom query parameters.
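A hedged sketch of the adjusted call shape (names, bounds and the body
are made up, not the generated API):

```rust
use std::io::Read;

// Sketch: `doit` becomes generic over a readable media stream for uploads.
struct VideoInsertCall; // stand-in for a generated call builder

impl VideoInsertCall {
    fn doit<R: Read>(self, mut media: R, mime_type: &str) -> std::io::Result<u64> {
        // Illustrative body only: count the bytes we would upload.
        let mut buf = Vec::new();
        let n = media.read_to_end(&mut buf)?;
        println!("would upload {} bytes as {}", n, mime_type);
        Ok(n as u64)
    }
}

fn main() -> std::io::Result<()> {
    let call = VideoInsertCall;
    call.doit(std::fs::File::open("video.mp4")?, "video/mp4")?;
    Ok(())
}
```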
This includes descriptions, of course, and generally seems to look
quite neat. For now, we brutally consume all input to own it,
but in the future we might be able to use Borrow to support them all.
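For illustration, the difference between a consuming setter and a
possible Borrow-based one (both are sketches, not the generated code):

```rust
use std::borrow::Borrow;

struct CallBuilder {
    part: String,
}

impl CallBuilder {
    // Today: brutally take ownership of the input.
    fn part_owned(mut self, part: String) -> Self {
        self.part = part;
        self
    }

    // Possible future: accept anything that can be viewed as a str.
    fn part<S: Borrow<str>>(mut self, part: S) -> Self {
        self.part = part.borrow().to_owned();
        self
    }
}

fn main() {
    let b = CallBuilder { part: String::new() }
        .part_owned("snippet".to_string()) // must allocate
        .part("snippet"); // &str, String, ... all work
    let _ = b.part;
}
```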
Everything we have, feature-wise, is now documented in a first version
at least.
We shall keep this up to date with what we are implementing, which also
helps in figuring out a good API.
That way, we have a common library to pull in from the main repository,
and a space for testing new code (in a partial implementation).
Next there will be generated object structures.
That way, the makefile doesn't need to know that much anymore, and
gets simpler/less verbose.
# Also
* Added filters for Rust doc strings
* Fixed .PHONY
It's quite final, and super easy to change and to read.
It seems we want to use namespaces/shared implementations soon to allow
using defs. In our case, we transform the version in a particular way,
which is easy enough, yet I'd like to use it to make the system more
powerful.
That way, we read the data files only once, but produce all the outputs
we need. Together with a powerful makefile, we have a multi-invocation
setup with proper dependency tracking.
Everything will be regenerated, though, even if just a single input
template file changed.
The alternative would be to have one dependency and invocation per
input dependency, but that would read the entire json each time.
Let's see what's faster/more useful during development.
After minor modifications to pyratemp, it certainly does the job.
What it **does NOT** do:
* multiple outputs per template/command invocation
* NICE embedding of code (like GSL can)
It will do the job nonetheless, but mako might be worth a look.