Grand visions: How do we tie things together?

(This post by @leobard was originally a reply in another thread, but @hugi thought it was interesting enough to warrant forking into a new thread)

@hugi I am interested in giving advice and feedback and in testing designs. My main feedback: we see that Burners communicate via talk/slack, document in a hodgepodge of gdocs, ignore wikis, and the dreams platform is a boring place in the middle that only gets attention when it's heart-giving time. My vision would be to have a uniform platform with one login for Burnertickets, Dreams, a Wiki, and a communication (slack/talk) platform. It would be great to have public (dreams) and private (helm/site lead, nomad, rangers, welfare) pages and channels. Everyone who buys a ticket (burnertickets) would be auto-signed up for all platforms, enabling them to collaborate and to be referenced (@-mentioned). The hypothesis: more information would end up in a central space, and communication about a document (let's say, a wiki page with the lineup of a stage) would include all the people involved (via @-mentions) and get their consent/feedback quicker.

I would know how to code that but have time constraints.

Background: I have a PhD in IT, I invented semantic wikis, I was part of the Semantic Web Interest Group at W3C, and I have product management experience in IT. I am a Burning Man RC in Austria and have organized small burns. No time for coding for particip.io as I am a full-time employee. I could make inspirational videos though.

2 Likes

I've thought about this, and it's interesting, but monolithic uniform systems are notoriously difficult to maintain and tend to start feeling out of date fast. I like your thinking on this, but I think I'd opt for something more modular: for example, working towards Dreams, Realities, ticketing, and some communication platform sharing a single login through an API, and perhaps even sharing a backend service so they can easily integrate with each other. It would feel like a single ecosystem to the user but would actually be many different services.

You’re most welcome! Great to have you here! Check out the Realities project and see if it’s something you’d like to get onboard with.

Federation is a great idea, I assume there isn’t much more than identity that is shared state between these systems. There are some great floss projects to build on here, e.g. pretix.

1 Like

Since we’re talking about different future strategies for a potential future system or suite of systems for running participatory events, I thought I’d put some ideas down that have been bouncing around my head for a while.


Note that this very hypothetical idea for an architecture includes a hypothetical new membership module for ticketing, and also assumes that we'd pretty much rebuild Dreams from scratch. I'm very much on the fence about whether that's a good idea, but it would allow us to go into a sort of ecosystem-thinking territory that we can't venture into while our systems are as insular as Dreams is now. We should however note that Realities is already being built in a way that would fit this model hand in glove.

I’d love to hear what @krav and @leobard think about this?

Hope you don't mind someone else butting in. I wouldn't have all the apps sharing one database, mainly because as the database changes you need to make sure that each app's code is updated at the same time as the database schema. Generally you want to associate an app with a database and have each app provide a RESTful, versionable API for the other apps to consume.

As well as SSO, I imagine you might want something like AWS IAM. IAM lets people create groups and roles that can do certain things and assign them to people. So the person who creates an art project could assign collaborators who can update information, alter project wiki pages, or update Realities for their organisation.
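
To make that concrete, here is a minimal sketch of what such project-scoped roles could look like. All of the type, action and identifier names are hypothetical; nothing like this exists in Dreams or Realities today.

```typescript
// Hypothetical sketch of project-scoped roles, loosely inspired by AWS IAM.
type Action = "project:update" | "wiki:edit" | "realities:edit";

interface Role {
  name: string;
  allowedActions: Action[];
}

interface Grant {
  userId: string;    // the SSO identity
  projectId: string; // e.g. a Dream, camp, or art project
  role: Role;
}

const collaborator: Role = {
  name: "collaborator",
  allowedActions: ["project:update", "wiki:edit"],
};

// Any service in the ecosystem could answer "may this user do X on project Y?"
function can(grants: Grant[], userId: string, projectId: string, action: Action): boolean {
  return grants.some(
    (g) =>
      g.userId === userId &&
      g.projectId === projectId &&
      g.role.allowedActions.includes(action)
  );
}

const grants: Grant[] = [{ userId: "alice", projectId: "dream-437", role: collaborator }];
console.log(can(grants, "alice", "dream-437", "wiki:edit")); // true
```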

There might be something open source that is appropriate. A quick googling suggests https://syncope.apache.org/ but it looks very corporate. Shibboleth would be full-on federated but might be overkill to start with.

Background: I work in IT, mainly doing operations on AWS. Can code, etc. About to go on sabbatical for a few months, but have a certain amount of stuff planned. Helped a bit with organising a Decompression a while ago, but otherwise only burner-adjacent.

hey all!

I love the vision that is being presented here: the attempt to unite people in their efforts at all scales and to bring in the best from all of these tools, which are being used with different intensity in different communities.

The balancing act of things like single sign-on and carrying the social graph across systems, while still being able to evolve those systems and adapt to the needs of the present, resonates very much with the work I have been doing, and I would love to present how I think the tech I have been involved with over the past year solves a lot of these pains and creates massive amounts of opportunity.

I've been following and participating in the development of the open-source pattern and infrastructure Holochain. I've described it briefly in a response to an inquiry here on edgeryders, but I'd like to point out that this tech gives us the ability to maintain data integrity while keeping the flexibility of designing for a specific local need, and to pull in modules generated in the global community whenever useful. This is done by bridging all systems on the level of the agent.

Apps talk to each other through you. Through you as a user, you can bring your identities and graph from one space to another, and you don't need to handle authentication separately at different spaces. You can also create distributed data storage that is only loosely coupled to the front-end interfaces you want to use; this code is created with interoperability as its main purpose. I could toot its horn for some time, but I'll leave it for you to check out the websites, starting here perhaps.

As for the maturity of the codebase, it's pretty early days, but I have been building prototypes on it to try it out. There is a Go version that is no longer actively developed, as a major rewrite of the whole structure into Rust is currently being finished. A stable beta is expected in Q1 next year. It's pretty weird tech to work with if you're coming from a background of designing from a data-centric worldview, but really powerful once you get into thinking in terms of actor-centric programming. Basically, you install ways of speaking, enter that data locally in a chain, and then choose to share what you have said into different shared spaces so that others can validate and interact with it.

Would love to talk about it more, but I am super unavailable for about a week and a half, so if I respond sparingly, bear with me :wink:

2 Likes

@hugi right, we want to reuse the existing systems and not make one monolith. Making a monolith ends in "insanity", which is the last word here: https://www.wired.com/1995/06/xanadu/

As for the platform, I think everything is already there if we look at the web, web protocols, and linked data. The web is the platform. But I will make a video to illustrate that; it would take too long to write it up concisely. Hang on, gimme a minute…

1 Like

I'm proposing that we do something different here and actually use a single graph database to serve the entire ecosystem. We're already using a graph database for the Realities project, and since graph databases don't have schemas, but rather nodes and edges, it's quite possible to start with a database that only serves Realities and then later add new node and edge types when adding Dreams and Members modules to the ecosystem.

Obviously, this is just a quick example and not a full description of what types could exist:

In this example, Realities, Dreams and Members have all been adding nodes and edges to the same database. Any system can create the Event node, and all items created by that system are assigned a relates_to edge to an event node. Any system can create the User node if it doesn't exist, and it's linked to the SSO. Apart from that, different edges and nodes are created by different systems, but they can all see the same data. A minimal sketch of writing to such a shared graph follows the list below.

  • Realities:
    • Nodes: Needs, Responsibilities, TalkDiscussions links
    • Edges: depends_on, fulfils, refers_to, realizes
  • Dreams:
    • Nodes: Dreams
    • Edges: Funded, Created
  • Members:
    • Nodes: Users
    • Edges: has_membership
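
As referenced above, here is a minimal sketch, in TypeScript with the neo4j-driver package, of how any of the systems could write into such a shared graph. The connection details and the label, property and relationship names are only illustrative and follow the quick example above.

```typescript
import neo4j from "neo4j-driver";

// Connection details are placeholders.
const driver = neo4j.driver("bolt://localhost:7687", neo4j.auth.basic("neo4j", "password"));

// Dreams (or any other system) adds a Dream node and links it to the shared Event node.
async function addDream(dreamId: string, title: string, eventId: string) {
  const session = driver.session();
  try {
    // MERGE is idempotent, so any system can safely "create the Event node if it doesn't exist".
    await session.run(
      `MERGE (e:Event {id: $eventId})
       MERGE (d:Dream {id: $dreamId})
       SET d.title = $title
       MERGE (d)-[:relates_to]->(e)`,
      { eventId, dreamId, title }
    );
  } finally {
    await session.close();
  }
}
```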

It’s true that if we for example were to change the identifying label of a node or edge type, we’d have to update all systems, but I find it unlikely that it will happen often, if at all.

Another thing that becomes possible is exploring how things fit together by looking at the graph in a GUI like the one I used to generate the image above. We'd have a database that is pretty close to an actual 1-to-1 representation of the organisation. Questions you could very easily ask such a graph include the following (a sketch of one such query follows the list):

  • Which responsibilities do most dreams depend on?
  • List all first and second order dependencies of a dream (i.e. every need and responsibility the dream depends on, and everything that those needs and responsibilities depend on)
  • Which people are many other people depending on for their dreams or responsibilities?
  • Which people carrying out responsibilities or creating dreams should I contact before making a decision about the burn perimeter?
  • Does spending all your grants in 2018 correlate with taking on responsibilities in 2019?
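
As an example of the second question in that list, a first- and second-order dependency lookup could be a single Cypher query. This continues the sketch above and reuses its driver; the Dream label and depends_on relationship name are hypothetical and would follow whatever the shared graph actually uses.

```typescript
// First- and second-order dependencies of a dream, via a variable-length path.
async function dependenciesOfDream(dreamId: string) {
  const session = driver.session();
  try {
    const result = await session.run(
      `MATCH (:Dream {id: $dreamId})-[:depends_on*1..2]->(dep)
       RETURN DISTINCT dep`,
      { dreamId }
    );
    return result.records.map((record) => record.get("dep").properties);
  } finally {
    await session.close();
  }
}
```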

@hugi, @zaunders, @eb4890, @krav: Given that we want to have something working in the foreseeable future, I support the idea of keeping separate systems, but going for an integration of login and an integration of data.

For login, I propose OpenID.
For data exchange, I propose the "Linked Open Data" standard (also called RDF, the Resource Description Framework; it also goes by microdata, Schema.org, RDFa, JSON-LD, etc.: all the same idea and basically compatible).

To illustrate the approach, I made a video (actually 4 videos), which is more fun than writing. Check out this blog post and the videos in it; they were made with love for YOU:

Please check it out and see if you like the idea. If you like it, I could connect you to the other linked data devs over at Gitter. I am also willing to write up some RDF/LinkedData examples of the markup, and ideas for how to embed it on the website so that it can be crawled. But please contact me if you want that. I have limited time, and I would like to contribute only if my input is needed; if not, I am happy doing other things.

Note: Tim Berners-Lee is currently working on "Solid", which should be a standard that combines OpenID with LinkedData, but I haven't tried it yet. It may be awesome, it may not be; I don't know, but I recommend checking it out.

@hugi I am curious what you think about my input.

I may note that I worked 10 years in the field and wasted about 16 million EUR in research funding rolling out a system to 4 million users, only to have it go out of use a few years later. Along the way I found that using "one graph database to store all info" is nice in theory but brings both performance and developer commitment down. The proposed approach of loosely coupled but well-working and quickly and easily developed systems would be better, according to this experience.

I just want to help you avoid making the same mistakes as me :slight_smile:

1 Like

Thank you @leobard! I’ve been offline all weekend and haven’t had time to look through the materials and videos yet, but I’m absolutely planning to do so in the next few days.

I've now watched your videos, done some research to further understand the underlying tech, and done some thinking around this, @leobard. Thank you for putting in the time and care to make these videos. There was quite a lot to unpack there for me, so I'll go through it step by step.

Sharing data between systems is the way to go

Ok, you, @eb4890, @krav and others have convinced me: the "one graph database" idea is not a good one. It's not necessary and will just cause more trouble than it's worth. Thanks for talking me out of it.

I'm also thoroughly convinced that we should share data between the different systems. Now what?

A short summary of Leobard's proposal

Since there is a lot to unpack in the videos, I'm going to make a very concise summary so that people can follow the conversation without watching all of the material (you should, though; it's a great primer on linked data). So, @leobard, correct me if I've got something wrong or missed something important. Even better, just change it yourself, since this post is a wiki.

Points on RDF

  1. Different systems solve different challenges, and to develop quickly and efficiently they should be built with whatever stack of technologies fits the job and is familiar to the developers.

  2. Rather than different systems sharing a database, they should make data available to each other through shared standards.

  3. RDF is a tried, tested and open standard which Leobard recommends we implement. In practice, this would mean that each application has an exporter which converts data from whatever database it uses into serialized RDF and makes it available at an endpoint (a minimal sketch of such an endpoint follows this list).

  4. One benefit of using RDF is that there are a lot of standard schemas available to use on Schema.org which makes it easier for other developers to plug their systems into ours.

  5. Each resource (dream, person, event, membership, etc.) is uniquely identified with a URI, which in this case is a URL; for example, http://dreams.theborderland.se/dreams/437 is the URI of that dream. We could then choose a fitting resource type for a Dream from Schema.org, for example CreativeWork. We would use the schema properties when possible and our own custom properties when necessary.

  6. By sending serialized RDF data between applications, we could for example create a new thread on Talk for each Dream created on Dreams, and list all posts in that thread on Dreams, as long as both systems implement RDF exporters and listen to each other's endpoints.

  7. If Particip.io and its projects were to join the RDF and Linked Data community, we would find kindred spirits with a very similar ethic and ambition to ours, and make some very interesting friends and allies.
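
To make point 3 above concrete: Dreams itself is Ruby on Rails, but to keep these sketches in one language, here is roughly what such an RDF exporter endpoint could look like in TypeScript with Express. The route, the loadDream helper and the field values are hypothetical; the JSON-LD shape follows the CreativeWork idea from point 5.

```typescript
import express from "express";

const app = express();

// Placeholder for however Dreams would actually load a dream from its own database.
async function loadDream(id: string): Promise<{ name: string; author: string; imageUrl: string }> {
  return { name: "Hammock Reactor", author: "Salon Leobard", imageUrl: "https://example.org/5.jpg" };
}

// Hypothetical "RDF exporter": each dream is served as JSON-LD at its own URI.
app.get("/dreams/:id.json", async (req, res) => {
  const dream = await loadDream(req.params.id);
  const payload = {
    "@context": "http://schema.org",
    "@id": `http://dreams.theborderland.se/dreams/${req.params.id}`,
    "@type": "CreativeWork",
    name: dream.name,
    author: dream.author,
    image: dream.imageUrl,
  };
  res.set("Content-Type", "application/ld+json").send(JSON.stringify(payload));
});

app.listen(3000);
```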

Points on authentication

  1. Leobard recommends OpenID for single sign-on.

  2. Tim Berners-Lee's Solid implements both OpenID and RDF but is still in its early stages.

Points on architecture

  1. A graph database is usually not a great choice for your primary data storage, but is fantastic for indexing and getting an overview of how systems fit together.

  2. A visualisation app built on a graph database could easily pull all the standardised exported RDF data from the different applications and keep an updated graph of how things fit together.

My take on it and how it fits with our current momentum

In general, I love this way of thinking, and I’m convinced by the arguments. Like I mentioned above, this has made me back away from the single database idea and made me realize it’s better to opt for sharing data between applications.

However, we also need to move with our current momentum. We currently have two applications within the Particip.io umbrella that we’ve invested time and money in.

Dreams

Dreams is now almost three years old and has seen substantial development. It's built on Ruby on Rails, and while it does not currently have any APIs, implementing an RDF exporter and having it listen to other endpoints would not be a lot of work if we decide on a good strategy.

Realities

Realities is a new system that will be at the core of how we make sense of the Borderland in 2019. We’re close to finalizing the first development cycle as @erikfrisk will deliver the first MVP in the coming weeks. It’s already deployed on a staging server for testing.

Realities is a JavaScript single-page app built on Node, GraphQL, Apollo and React, with Neo4j as a database. In this case, we're indeed using a graph database as the primary data store, but only because Realities is not much more than an application to edit and view the graph that describes how the needs, responsibilities and roles in an organisation are structured and depend on each other.

React was chosen because it's the front-end framework with the biggest momentum today, and it's also what most of the developers were familiar with. GraphQL was chosen because it has become an increasingly popular replacement for REST APIs among React developers. GraphQL works well with graphs, and it has seen some use with Neo4j as its database.

Neo4j was chosen mainly because it was the graph database with which I was most familiar at the time, and it’s a system that’s easy to prototype models in.

All of the technologies chosen are open source. Apollo, React and GraphQL fall under the MIT license and Neo4j falls under the GPL3 license.

GraphQL vs RDF?

In your video you described how Facebook and other walled gardens greatly decreased interest in RDF. Ironically, once Facebook grew as huge as it is, it needed to find good ways to share data within its walled garden and built GraphQL, which it has now released its patents for. Interestingly, some of the RDF terminology is used by GraphQL, which receives queries shaped like the JSON you want back and then returns data in the same shape as the request itself. You can see these in action in the Realities code. I'm not sure if you're familiar with GraphQL, but essentially Queries only request data while Mutations make changes.
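
To illustrate "the response mirrors the request", here is a minimal sketch of querying a hypothetical GraphQL endpoint for Dreams over plain HTTP. Dreams has no such endpoint today, and the field names are made up.

```typescript
// Node 18+ (built-in fetch), run as an ES module.
const query = `
  query {
    dream(id: "437") {
      name
      author
    }
  }
`;

const response = await fetch("https://dreams.theborderland.se/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query }),
});

const { data } = await response.json();
// data has the same shape as the query asked for, e.g.:
// { dream: { name: "Hammock Reactor", author: "Salon Leobard" } }
console.log(data);
```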

Watching your videos made me realise that it would probably be pretty easy to implement a GraphQL endpoint in Dreams too. There is an actively maintained GraphQL library for Ruby on Rails which we could use to relatively easily interface Dreams and Realities! That's a pretty interesting idea, since @questioneer, who is joining us for the development week in Medenine in November, is a Rails developer with GraphQL experience.

As for having a graph database collect all relevant data, the Realities graph database could simply double as an index for all data from Talk, Dreams and other systems, and we could build views to show and filter that data as we keep developing Realities to also show Dreams connected to Needs and Responsibilities. As long as all new applications we build also implement GraphQL, this would be reasonably straightforward.

But what about RDF, Solid and the linked data community?

Damn it. Right. I really agree with you that it would be cool to plug into that world. How do RDF and GraphQL overlap? Are they two technologies solving a similar problem? Does one complement the other? Is there any sense in using RDF together with GraphQL? I don't understand well enough what an architecture using RDF looks like in practice compared to our Realities stack. Maybe, based on the information and thinking above, you have some ideas, @leobard?

This is a really interesting conversation, and I’m enjoying it. I’m going to tag @matthias because I have a little bit of a suspicion that we would also find this enjoyable. This is also relevant to some recent conversations I’ve had with @kristofer and also with @brooks.

3 Likes

I only have cursory knowledge of RDF and GraphQL. While I hear good things about GraphQL, I wonder, when it comes to RDF, what the overhead and benefits are compared to just making our own ad-hoc APIs. As far as I understand it, none of the data is being designed to be published.

Are there other transports for RDF? Solid does not look all that mature to me. (That Berners-Lee is involved just makes me skeptical, for the amount of awards he’s gotten, the computer science professor has amazingly few publications on dblp).

I don't really understand the scope of this project, so I find it hard to make recommendations. As far as I can tell from the current state of Dreams and Realities, all that is needed is single sign-on.

@hugi thanks for getting inspired by the ideas about LinkedData.

First, Thom van Kalkeren over at https://gitter.im/linkeddata/chat recommended that I look at a framework that combines ideas from GraphQL and RDF: Cayley, an open-source graph database (https://github.com/cayleygraph/cayley). It uses the GraphQL syntax to expose an RDF store: https://github.com/cayleygraph/cayley/blob/master/docs/GraphQL.md

How I understand GraphQL's role for integration

  • I looked at the code in realities/…/api/src/index.js and got an idea of how GraphQL is used to get data.
  • The identification of things in GraphQL is always relative to the API you are using: if you want to know "what is the name of Dream #123", you query something like {dream(id: "123") { name }}.
  • There is no global ID scheme; I understand this is by design.
  • Data from different sources cannot be combined, by design. Rather, you query one source after another. To get the GPS position of Dream #123, you would query another service and go for something like {gpspositions(id: "123", type: "Dream") { lat lon }}

How I understand Linked Data for integration

In Linked Open Data, it would work differently:

  • The ID of the dream would be its URI: http://dreams.theborderland.se/dreams/349
  • The core data the dream platform knows about this dream would be expressed as RDF, for example as “creative work” from schema.org. Example in JSON-LD { "@id": "http://dreams.theborderland.se/dreams/349", "@context": "http://schema.org", "@type": "CreativeWork", "author": "Salon Leobard", "image": "http://s3-eu-central-1.amazonaws.com/images.dreams.borderland/images/attachments/000/001/342/large/5.jpg?1525040826", "name": "Hammock Reactor" }
  • The dreams platform can publish this data in different ways. You would choose the way that is easiest to implement and easiest to parse the data from later. I list some of the ways I think are useful:
  • a) Use the ontola rdf-serializers gem for Rails, which lets you expose your Ruby objects as RDF (n-triples, JSON-LD, Turtle): https://github.com/ontola/rdf-serializers . It builds on top of https://github.com/rails-api/active_model_serializers and was recommended to me today by Thom van Kalkeren over at https://gitter.im/linkeddata/chat
  • b) Embed the JSON inside the HTML of the Dreams page.
  • c) Return the JSON as separate document/URI (example: http://dreams.theborderland.se/dreams/349.json) and refer to this URI from the dreams page using a link from the HEAD, such as <link rel="meta" type="application/ld+json" href="/dreams/349.json"/>.
  • d) Provide a data-dump of all dreams at http://dreams.theborderland.se/alldata.json . This could point to a service that dumps all data as one big RDF stream (file).
  • In theory, there are many more ways to serve RDF, but these four are the ones I think are most realistic in our scenario. a) may be easy to implement. b) and c) would be typical ways people publish linked data so that search engines (Google, …) can find it. d) is a typical hack that goes a long way.
  • The Realities platform could access the linked data in two ways: on demand or indexed.
  • On demand: Realities just knows that it can HTTP GET http://dreams.theborderland.se/dreams/349.json and that this is linked data. Whenever needed, it fetches the JSON and renders the parts it needs (a minimal sketch of this follows the list).
  • Indexed: Realities fetches http://dreams.theborderland.se/alldata.json regularly (for example daily; there won't be many changes on dreams) and stores it in a SPARQL database.
  • In the "indexed" case, the fact that globally unique IDs are used comes into play: the SPARQL database can store tons of RDF data in just one big RDF database without any configuration. You just dump one file after another into the database. As all files reference globally unique IDs (URIs), the graph automatically builds itself and fills itself with data.
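
As referenced above, the "on demand" case could be as small as this sketch, assuming the Dreams exporter from option c) actually existed; the fields follow the JSON-LD example above.

```typescript
// Node 18+ (built-in fetch), run as an ES module.
const res = await fetch("http://dreams.theborderland.se/dreams/349.json");
const dream = await res.json();

// Realities renders only the parts it needs.
console.log(dream["@id"]);  // http://dreams.theborderland.se/dreams/349
console.log(dream.name);    // "Hammock Reactor"
console.log(dream.author);  // "Salon Leobard"
```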

Adding more applications (with Linked Data)

The interesting part starts when more applications come into play; then Linked Data has more meaning:

  • Let's go further: there is a "Google Maps Application" which may manage the GPS position of the Dream. That would be (in JSON-LD): {"@id": "http://dreams.theborderland.se/dreams/349", "@context": "http://www.w3.org/2003/01/geo/wgs84_pos#", "lat": 55.6154473, "long": 12.1574238}. Again, this data would be available, together with the positions of all the other things you can have on the playa, in a file served from somewhere, let's say http://myplacementapp.com/borderland/placement.json
  • In the “indexed” case, you would know that all placement data is from there and you would also retrieve this data and store it into your SPARQL Store.
  • Now you could query your SPARQL store with queries such as SELECT ?p ?o WHERE {<http://dreams.theborderland.se/dreams/349> ?p ?o.}. The result of this query would return the data from both the dreams platform and the map, combined (a sketch of running such a query follows this list). The patterns in this query stand for "?subject ?predicate ?object"; this is comparable to "key value" in JSON. The differences are that in linked data, keys and values are always attached to a subject they talk about, identified by a URI, and that the values are called "objects", as they can themselves be subjects elsewhere, or simple data values.
  • In Ruby, you could for example use https://github.com/ruby-rdf/sparql-client to read/write data to a SPARQL store
  • The power of LinkedData is that you use globally unique IDs to identify resources across systems: you can easily combine two systems if they "link to each other's data" in the first place. We saw this in the example of the "Google Maps Application", which already referenced the Dream by its URI; it was clear that it places things that come from another webpage. The "Google Maps Application" could be used to place anything, not only dreams. You would go into the application and it would ask you "what do you want to place? please enter the URI here". After entering the URI, such as http://dreams.theborderland.se/dreams/349, the placement app would use the above methods (a, b, c) to find out what this is and then say "ah, ok, I got some JSON-LD, you want to place a CreativeWork with the name Hammock Reactor".
  • The same goes for the data. Each app can use its own data "scheme", and again the IDs of the properties are global. So while the dreams platform would use schema.org to state name, author and image, the placement/maps app would use the GEO scheme to express latitude and longitude. The RDF store stores the data internally with the keys being the full URIs (i.e. http://schema.org/author, http://www.w3.org/2003/01/geo/wgs84_pos#lat). So even if both apps define a "name", if they use different schemas, both names would be stored and both would be queryable.
  • This design allows you to connect various systems before you even know what systems they are going to be. And connect various data formats before you know what exactly they are going to be.
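
As referenced above, here is a hedged sketch of the "indexed" case: running that SPARQL query over the standard SPARQL HTTP protocol against whatever store we pick. The endpoint URL below is made up; the result would combine triples from Dreams and the placement app, keyed by full property URIs.

```typescript
// Node 18+ (built-in fetch), run as an ES module.
const sparql = `
  SELECT ?p ?o WHERE {
    <http://dreams.theborderland.se/dreams/349> ?p ?o .
  }
`;

const res = await fetch("http://localhost:3030/borderland/sparql", {
  method: "POST",
  headers: {
    "Content-Type": "application/sparql-query",
    Accept: "application/sparql-results+json",
  },
  body: sparql,
});

const { results } = await res.json();
for (const binding of results.bindings) {
  // e.g. http://schema.org/name                          "Hammock Reactor"
  //      http://www.w3.org/2003/01/geo/wgs84_pos#lat     "55.6154473"
  console.log(binding.p.value, binding.o.value);
}
```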

… if you consider RDF Storage to index data

Now if you want to build a large index, you would need an RDF database. The SPARQL databases are interesting thingies.

  • I can recommend Apache JENA/Joseki Server, it is simple and works
  • I can recommend OpenRDF/rdf4j/Sesame, it is a bit more powerful and also works. There is an interesting high-performance backend (http://marmotta.apache.org/kiwi/).
  • I also find Cayley looks pretty cool; it works on top of SQL/NoSQL databases, which would be a good idea for backup/data safety.
  • Virtuoso is another store that is complex, powerful, but a beast to get started and to tame.

In all cases: use SPARQL to query.

Comparison

I think the key difference is that RDF is built for heterogeneous systems where multiple apps are going to publish/consume data and new apps can “join” anytime later. With GraphQL you would rather connect two systems for a specific purpose.

That makes the RDF APIs slower to realize in the first place, as you would have to decide which method you are going to use to publish the data, maybe also taking into account which client libraries you have for RDF. Compare this to GraphQL, where you do not have to make so many decisions: there is just one single method for publishing the data, the GraphQL endpoint.

Another difference is combining data:

  • With GraphQL, you cannot just dump the JSON from two different services into the same database and combine it; the keys would clash, and probably the data too. GraphQL is made to be consumed by the GUI and presented.
  • With RDF, you could load multiple data sources and then query them as one.

What I think a wise developer for #realities could consider

If we look at the use cases I pointed out in the video, I would go for the following recommendation based on technical aspects:

  • For clear & easy APIs, use GraphQL. You have mapping frameworks you can plug on top of Ruby to expose your data model as GraphQL. You are good at consuming GraphQL. You will be quick and you will get results that work for you. You already made the decision to use React/GraphQL.
  • For data from one app included into another app: GraphQL.
  • To change data from another app: GraphQL.
  • For apps that talk about data from OTHER apps (such as “this mapping app is needed to place things on a map which are actually created/represented within other apps”), referencing to entities from other Apps using URIs would be a start.
  • If a big data dump is to be exposed by one app and consumed by another: Exchanging Data between Apps using linked data could be then considered.
  • For a big unifying read-only database where data from many apps is to be indexed to find out "all we know about this dream", a SPARQL store and linked data could be interesting. The open question would be how to "crawl" the data and get it into the store; option d) from above would make it easy, and this is the way many linked data apps work today. Google crawls individual pages and sucks the RDF out of them, and also tries to get dumps. Another question would be when to crawl (regularly, or, if we know when data changes, on demand). I could ask some people about that if needed.

Social / Human aspects

  • You know GraphQL already. Developer support is better for GraphQL as it is a “trendy” framework, you will find tons of info. To quickly get useful, working stuff done within hours, GraphQL is probably the better choice for you.
  • LinkedData has been around since 1998, and it is heavily used by the search engines (Google) and for marking up data all over the web. But many of the tutorials you find online are outdated (as it has 20 years of history), and finding the "2018" stuff is trickier. You find a lot more sources about RDF, but it's harder to decide what is useful and what is not.

(sorry for the late reply, I hope it’s still in time)

Another excellent post, @leobard! If nothing else, I'm really enjoying learning about the semantic web and linked data, and it's a good reference conversation to point to, since I'm probably quite a typical case when it comes to getting the web developer mainstream to think about linked data.

Yes, for the same reasons as you point to, this is my conclusion as well.

This could definitely be something to consider, especially as we might run into wanting to pull together data from sources outside of our particular project. We will keep this option in mind and use what you've written here as guidance and reference.

Indeed, this has been my experience in trying to research this lately. I've also spoken to a few developers about it, and most of them are pretty oblivious to RDF. If they're familiar with it at all, they just know it by name, and usually only if they've been in the game for over 10 years. Running projects like ours, we're usually not working with developers with 15+ years of experience, since most of them are busy with their full-time jobs. In the long term, I'm hoping to build a development lab in which creative developers at all levels, veterans and fresh talent alike, can work on experimental and fun projects some of their time. Eventually I hope to tie a wide group of developers to the Participio Development Lab, and in the future I could absolutely see us tinkering with RDF and Solid.

I’ll quote the Participio mission statement here for reference:

Ping @jean_russell, I think this may be of interest to you guys…