I think you’re missing that Natalia’s software is meant to create a new Discourse account for the user. Not just sending instructions: it creates the account right away, and then posts the user’s contribution, made via Natalia’s software, under the user’s name on Discourse as a new topic. That’s the main functionality.
At that point, the user’s content contribution is in Discourse, the consent funnel is confirmed, and the user has a Discourse account. Everything is “normal” from that point forward. That includes any kind of data analysis, which would happen with our usual tools inside Discourse. No additional requirements on that front. If my understanding is not what @natalia_skoczylas wants, she can always shout.
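To illustrate the flow, here is a minimal sketch of the two calls the minisite would make, in order. The endpoint paths follow the standard Discourse API (`POST /users.json` to create an account, `POST /posts.json` with a title to start a new topic), but the exact required fields can vary per Discourse configuration (e.g. honeypot fields on signup), so treat this as an assumption-laden sketch, not a final implementation:

```javascript
// Sketch of the minisite's two Discourse API calls, assuming standard endpoints.
const DISCOURSE_URL = "https://edgeryders.eu"; // assumption: target forum

// Step 1: build the signup request that creates the user's account right away.
function buildSignupRequest(user) {
  return {
    url: `${DISCOURSE_URL}/users.json`,
    method: "POST",
    body: {
      name: user.name,
      username: user.username,
      email: user.email,
      password: user.password,
    },
  };
}

// Step 2: build the request that posts the contribution as a new topic under
// the user's own name (supplying a title makes Discourse open a new topic).
function buildTopicRequest(apiKey, username, title, raw) {
  return {
    url: `${DISCOURSE_URL}/posts.json`,
    method: "POST",
    headers: { "Api-Key": apiKey, "Api-Username": username },
    body: { title, raw },
  };
}
```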
An additional requirement for the user interface came in from @nadia as a condition for making it useful for (and fundable via) our current H2020 projects: the process for making a contribution should be a step-by-step process, with a clear visual chain-of-steps guidance element from the first to the last page of the multi-page form. The reference for that is the “Challenge and Responses” mechanism we had in our old Drupal 7 based platform (you can have a look, it should still be accessible). When used in the H2020 projects, it would be another site with other questions, but the basic step-by-step process and its visual design should be the same. This means the questions should be easily configurable.
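One way to make the questions easily configurable would be to drive the whole multi-page form from a plain configuration object, so an H2020 site can swap in its own questions without touching the code. All names and questions below are invented for illustration:

```javascript
// Hypothetical per-project question configuration; swap this out per site.
const questions = [
  { id: "who", label: "Who are you?" },
  { id: "challenge", label: "What challenge are you facing?" },
  { id: "response", label: "How are you responding to it?" },
];

// One form page per question; the visual chain-of-steps element can be
// rendered from the same data, so the guidance stays in sync with the form.
function progress(stepIndex, config) {
  return {
    step: stepIndex + 1,
    total: config.length,
    label: config[stepIndex].label,
    percent: Math.round(((stepIndex + 1) / config.length) * 100),
  };
}
```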
So it’s quite well defined already, but of course wireframes or hand-drawn drafts will be good at this stage to really be sure about our common understanding. @natalia_skoczylas, can you draw the imagined user interface for us? Pen and paper is fine.
That’s what I thought at first too, but then I re-read what has been discussed in the thread and now I’m not too sure. But exactly what you describe is what I outlined in my recommendation on flow above.
My quote from above does not refer to Natalia’s requirements at all, but rather to the NGI Platform (currently on ngi.edgeryders.eu). Full quote:
So that’s just an illustration of what fitting a piece of software into our landscape would look like, based on the only other shared experience we have for this so far.
Yes, I understand, but since the argument was to extend that tech choice to also cover this use case, I was pointing out that the scope is quite different.
Yes, it would look like a questionnaire, but not one to collect typed data for analysis, statistics and so on. It’s just a way to solve blank-page paralysis for new contributors, by guiding their writing as in a written interview.
All of these questions together with their answers would then be combined into a single Discourse post, starting a new topic. Appropriate Markdown formatting would be applied to distinguish questions and answers.
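Combining the answers into one post is then a small formatting step. A sketch, assuming we mark each question as a Markdown heading above its answer (the exact heading level and layout are of course up for discussion):

```javascript
// Combine answered questions into a single Markdown body for the new topic,
// using headings to visually distinguish questions from answers.
function composeTopicBody(answers) {
  return answers
    .map(({ question, answer }) => `### ${question}\n\n${answer}`)
    .join("\n\n");
}
```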
(As always, @natalia_skoczylas shout if I’m going off-track here.)
Ok, with that as a starting point, here’s my proposal for a software architecture that I think we can both live with … I’m even starting to like it.
The minisite will be a single-page application made with a JavaScript framework of @hugi’s choice, and hosted as a compact, static website on the Edgeryders server.
Deployment can be done via git, as on surge.sh. It works like that on the NGI Forward platform as well, except that we pull automatically from the GitHub master branch.
The minisite collects input from the user and talks directly to Discourse via its API.
The Discourse API includes some custom API endpoints where necessary, in the same way we have already extended the Discourse API elsewhere. The same Discourse API key is used to access both the standard Discourse API and our extensions. Extending the API is done in Ruby, directly as part of the Discourse codebase.
Our extensions to the Discourse API are packaged and distributed as a Ruby gem. If possible it would rather be a Discourse plugin, but I think extending the API is beyond what plugins are allowed to do in Discourse.
The extended Discourse API can also include endpoints with cached responses, if there is a need for that.
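For the cached responses, the real implementation inside Discourse would use Rails caching; the following is only a tiny standalone sketch of the idea, with an arbitrary TTL and an injectable clock so it is easy to reason about:

```javascript
// Minimal in-memory TTL cache sketch. A real Discourse-side implementation
// would use Rails caching; this just illustrates "cached responses".
function makeCache(ttlMs, now = Date.now) {
  const entries = new Map();
  return {
    get(key) {
      const hit = entries.get(key);
      if (!hit || now() - hit.at > ttlMs) return undefined; // miss or expired
      return hit.value;
    },
    set(key, value) {
      entries.set(key, { value, at: now() });
    },
  };
}
```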
That kind of architecture means doing it the same way Discourse itself is built: providing a second frontend for the same Discourse API backend. In that sense, it’s quite elegant.
Advantages of adding custom API endpoints as extensions to the Discourse API itself:
We avoid duplicating parts of the already existing official Discourse API, but instead use that wherever possible.
We avoid one or two layers of APIs. Every additional API layer is complexity, and complexity must be avoided at all costs, unless it’s better to have it.
We build on the work of an extended Discourse API that we already started, rather than starting another one from scratch.
The Ruby code implementing the API runs in the Discourse environment, so it can access all Discourse classes and also the database. For future scenarios, database access (via Ruby’s ActiveRecord, of course) can speed up queries for data analysis and visualization, for example, avoiding one use case for cached responses of the standard Discourse API.
That also applies when extending the Discourse API as outlined above. In fact we have a start of an Edgeryders API that extends Discourse already, so let’s proceed there.
Proposed work distribution:
@natalia_skoczylas, @hugi and I do the software design together.
Ok, I like this a lot. I didn’t know that you had extended the Discourse API and packaged that into a gem. That’s quite elegant indeed. But how would you solve the issue that we don’t want the static front-end site to hold any API keys? The way I do things now is that I let the Participio cache API hold the Discourse API key. Having a cache also avoids the issue that rate limiting on the open API can make loading a minisite very slow if it is, for example, loading posts from 3 categories and 2 tags while also getting the full content of 6 posts, all at the same time. If the user clicks reload a few times, they can end up not getting any content at all before the rate limit resets for their IP. And having an API key means that we can load content from protected “web content” categories. Would it be possible to build something like that into what you are proposing?
I agree to this, and I would probably check with Owen to see if he wants to implement the design on the front end with me. However, as I mentioned, I won’t be available at all until the beginning of August.
Probably the same way Discourse itself does it: signup does not require an API key (it’s necessarily the public API), and after signup the software can use the user’s brand-new API key to post to Discourse.
In Discourse itself, username and password (and then the session cookie) are used rather than an API key. But we would add an API endpoint that does a signup and creates and returns a new API key at the same time.
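A sketch of the client side of that proposed endpoint. To be clear, this is not an existing Discourse API: the endpoint path (`/edgeryders/signup.json`), the `signup_key` parameter and the response shape are all assumptions about the custom extension we would write ourselves:

```javascript
// Hypothetical custom signup endpoint: one call signs the user up and
// returns a fresh API key for that user. Path and field names are invented.
function buildCustomSignup(user, signupKey) {
  return {
    url: "https://edgeryders.eu/edgeryders/signup.json",
    method: "POST",
    body: { ...user, signup_key: signupKey },
  };
}

// After signup, subsequent requests authenticate with the returned key.
function authHeaders(response) {
  return { "Api-Key": response.api_key, "Api-Username": response.username };
}
```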
Spambot management
The challenge, however, is that this software allows users to sign up and then post to Discourse without even an e-mail address confirmation, or Discourse’s trust-level-based restrictions. So we’ll need to replace these mechanisms with mechanisms of our own, and will have to be a bit creative about this. Such as:
Configuring the instance used by Natalia with a static signup key, for example by adding it as a GET parameter to the bookmark URL used on the workstation where the software will run at the Biennale.
In other cases, handing out links with one-time signup keys via e-mail campaigns.
In the case of using such links in Twitter campaigns or other very public places, giving the signup links a limited lifetime of 1–3 days and hoping that spambots won’t find them before then.
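The three kinds of keys above (static for the Biennale workstation, one-time for e-mail campaigns, short-lived for public links) could share one server-side check. A sketch, with all record field names invented:

```javascript
// Hypothetical server-side signup key check covering all three key kinds.
// `store` maps key strings to records like { oneTime, used, expiresAt }.
function validateSignupKey(key, store, now = Date.now) {
  const record = store.get(key);
  if (!record) return false;                                       // unknown key
  if (record.expiresAt && now() > record.expiresAt) return false;  // lifetime over
  if (record.oneTime && record.used) return false;                 // already redeemed
  if (record.oneTime) record.used = true;                          // burn one-time keys
  return true;
}
```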
Anyway, the way Discourse avoids 98% of all spam signups is that spambots are too dumb to use a JavaScript SPA; they can only use HTML forms. We’d have the same advantage with the software for Natalia as planned above, so we might not need to invest much into anti-spam measures.
This does not solve the issue of reading a lot of content without risking being throttled, though. But perhaps I’m worrying too much about that; it might not be a problem in practice. There is also the issue of reading content from protected web-content categories, but we can solve that simply by not protecting them.
In other words: API rate limiting should be configured generously enough that human consumption of content never runs into the limit. Anything else that runs into the rate limit most probably indicates that a more specialized custom API endpoint is needed, one that returns just the content for final human consumption rather than a larger amount of intermediate content / data.
As a simplistic example, in Discourse itself the category overview lists contain post counts per topic. So the Discourse JavaScript client does not have to fetch all topics to count the posts itself.
Same solution. The JavaScript client software should only be able to get statistics etc. about access-protected content, not the content itself, if and while the user’s API key has no access to it. And for getting this statistical information, we can offer a public (not access-protected) custom Discourse API endpoint.
Hmm. Not sure what you mean here. This means that with this implementation we can’t use Discourse as a CMS and keep the web content posts hidden as we do now. Or am I missing something?
For the “Discourse as CMS” feature, the Discourse topics indeed cannot be access-protected anymore. To achieve the same effect (not confusing our Discourse users), we can however do one of the following:
Set the individual topics to “invisible” and, for CMS editor users who don’t happen to be mods, provide a list of links to them in some other topic.
Hide the category from the category list page by CSS.
See if we can get the “Suppress category from category list” setting back into Discourse, perhaps as a plugin. This was present until a few months ago, then removed by Discourse.
The other issue is how to manage edit rights for these topics. Currently, wiki topics in a category accessible only with a certain group membership suit us nicely.
In the future, the category would have to be publicly visible. That’s the “See” permission in the category settings. Right now, “See” implies wiki edit rights, which seems to be a bug / rough edge in Discourse that we could fix. After the fix, a registered user without special group membership would be able to see content in the CMS category, but not edit the wikis. Only a user in a group with at least a “See / Edit Wiki” permission level for that category would be able to edit them.
@Matthias is explaining it exactly the way I thought of it from the beginning: here we are on the same page. It is in fact storytelling, just using a different prop to make people talk, with an easier flow, broken down into steps, and flexible in terms of time and length. We also thought of displaying a progress bar. It will be a sort of Future Telling device, where the accuracy of the prognosis depends on answering all the questions, and I hope to encourage them to share more this way.
I won’t be able to prepare the required things today, but I will do it over the week.
Maybe it would be also useful for us to have a call, with the UX designer I’m working with as well? Possibly this is the fastest way to make things clear and plan steps forward? I’m quite flexible for the next days. (@hugi)
Thank you for all the brainpower you dedicated to figure it out.
People mean different things when they use the same words. With web design stuff, it is much better if you do proper wireframes and a UX flow and then post them here. Then add to that the fact that everyone has a different relationship to language…
Yes, that’s probably good eventually. However my time has been short and I’m heading to the Borderland after the skunkworks in Brussels. But you can start designing and bounce ideas with @owen on what’s possible - I will work with him to do the coding.
Seriously @natalia_skoczylas, it will be better if you ask them to produce the material I asked for, so we speed things along and are sure everyone is on the same page. Only after that does it make sense to have a phone call.