For connecting user apps powered by https://www.causal-rt.org/ I want to implement support for a DHT-based solution that uses P2P for transferring big chunks of data, so I want to build on Jami’s communication base. OpenDHT is one part, but what do you use for the P2P communication itself? Do you have something higher level that also includes key creation and management? In causal, communication is data-centric: data lives in “spaces” and can be mutably or immutably borrowed for reading and/or modifying it. If there is an abstraction level in Jami that matches that view of communication, I’d love a pointer to what I should look at. If there is no such layer, just tell me what the parts are and I’ll put them together myself.
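As context for the “big chunks of data” part: DHT values are typically limited to a few tens of kilobytes, so large payloads have to be split into content-addressed chunks plus a manifest. This is a minimal stdlib sketch of that idea (the chunk size and manifest layout are assumptions for illustration, not anything OpenDHT or Jami prescribes):

```python
import hashlib

CHUNK_SIZE = 56 * 1024  # assumed to stay under a typical DHT value-size limit

def make_manifest(blob: bytes):
    """Split a blob into chunks; return (manifest, chunk store keyed by hash)."""
    chunks = {}
    order = []
    for i in range(0, len(blob), CHUNK_SIZE):
        chunk = blob[i:i + CHUNK_SIZE]
        key = hashlib.sha256(chunk).hexdigest()
        chunks[key] = chunk      # in a real system: dht.put(key, chunk)
        order.append(key)
    manifest = {"size": len(blob), "chunks": order}
    return manifest, chunks

def reassemble(manifest, fetch) -> bytes:
    """Rebuild the blob, verifying each chunk; `fetch` maps key -> bytes."""
    out = bytearray()
    for key in manifest["chunks"]:
        chunk = fetch(key)       # in a real system: dht.get(key)
        if hashlib.sha256(chunk).hexdigest() != key:
            raise ValueError("chunk hash mismatch: " + key)
        out += chunk
    if len(out) != manifest["size"]:
        raise ValueError("size mismatch")
    return bytes(out)
```

Only the small manifest then needs to be exchanged; chunks can be fetched from whichever peers hold them.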
Then I will want to implement a client for the Jami daemon. Since I will mainly use it to exchange MessagePack-serialized data and do data streaming, is there anything I should keep in mind? In particular, is there anything that should not be done, to keep the Jami network “clean”?
So the implementation will basically provide a communication layer for causal that could be used in whatever way. The application I want to implement on top of it, named Dory, will also do messaging (preferably the Jami way and therefore compatible), but a lot more as well. There will even be a Python-driven abstraction in Dory allowing people to implement their own communication patterns as modules. So if this is successful, there might be a lot of data floating around in the DHT network that is unexpected from the perspective of Jami Messenger.
Presumably there is a way to isolate data that is not compatible with Jami Messenger?
Is there any plan to support Wi-Fi meshes? (For example, for use when the internet is shut down in repressive regimes.)
I don’t think we can easily answer this without digging into what you want exactly / what Jami provides.
But the API for the daemon is actually designed for “human” communication. This means the APIs are for sending text messages and files, starting calls, etc.
Sending arbitrary data on negotiated sockets between peers would probably need some modifications to the daemon (in any case, the daemon can be modified for your needs, and if the API needs to change for a project, that can be discussed and sent as a patch). To be a bit more precise, Jami generally tries to negotiate what we call a “multiplexed socket” with a peer. This multiplexed socket can carry multiple channels with names/ids, and each channel can be used to pass whatever data you want. So a daemon can add a new channel for the protocol it wants, and the other side will accept it or not.
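As a rough mental model of that channel multiplexing (the frame layout below is invented for illustration and is not Jami’s actual wire format), named/numbered channels over a single byte stream can be framed like this:

```python
import struct

def pack_frame(channel_id: int, payload: bytes) -> bytes:
    # Hypothetical frame: 2-byte channel id + 4-byte payload length + payload.
    return struct.pack(">HI", channel_id, len(payload)) + payload

def unpack_frames(stream: bytes):
    """Yield (channel_id, payload) pairs back out of a stream of frames."""
    offset = 0
    while offset < len(stream):
        channel_id, length = struct.unpack_from(">HI", stream, offset)
        offset += struct.calcsize(">HI")
        yield channel_id, stream[offset:offset + length]
        offset += length
```

Each side would dispatch frames to per-channel handlers, so one negotiated socket serves several protocols at once.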
Something to keep in mind: we are introducing a major change, Swarm (Swarm: a new generation of group conversations). The code is already in the daemon and the APIs are there, but it’s still in transition, which means that during the transition some APIs will change and there is a lot of deprecation.
Jami keeps working when the internet is shut down. There are options to bootstrap on other clients, and you can use any other node as a bootstrap.
The serialized data will always be in the context of some human-made content, most probably Markdown. So we are basically talking about metadata you add to a post, while or after the post is made. The idea was to treat this metadata as attachments:

- if related to one’s own post: a message plus the attachment
- if related to a foreign post: a message containing a reference, plus the attachment
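The two cases above could be sketched as one message builder (the field names are invented for illustration, and stdlib `json` stands in here for the MessagePack serialization the real payloads would use):

```python
import json

def metadata_message(attachment, foreign_post_id=None) -> bytes:
    """Build a post-metadata message.

    If `foreign_post_id` is given, the message references someone else's
    post; otherwise it attaches to the sender's own post.
    """
    msg = {"type": "metadata", "attachment": attachment}
    if foreign_post_id is not None:
        msg["ref"] = foreign_post_id  # reference to the foreign post
    return json.dumps(msg, sort_keys=True).encode()
```

A receiver would then look for the `ref` field to decide which post the attachment belongs to.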
That’s only required for synchronizing content, except that Swarm already synchronizes content between participants. Client synchronization was also meant to work that way. I also wanted to implement the possibility to “pin” data, which a requester can then receive from all nodes that have it pinned… a bit like torrent.
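A toy model of that torrent-style pinning, just to pin down the idea (everything here is invented for illustration; nothing in Jami or OpenDHT works exactly like this): nodes announce a content hash they pin, and a requester can fetch from any announcing node.

```python
import hashlib

class PinRegistry:
    """Toy model of torrent-style pinning: nodes pin a content hash,
    and a requester can fetch from any node that has it pinned."""

    def __init__(self):
        self.pins = {}  # content hash -> set of node ids providing it

    def pin(self, node_id: str, content: bytes) -> str:
        key = hashlib.sha256(content).hexdigest()
        self.pins.setdefault(key, set()).add(node_id)
        return key

    def providers(self, key: str) -> set:
        return self.pins.get(key, set())
```

In a real deployment the registry itself would live in the DHT (hash as key, provider announcements as values), much like a torrent tracker/DHT announce.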
That’s a good thing.
The interaction will always happen from C++. At some point a wrapping API of its own will be provided for embedded Python scripting, but that’s further in the future. It also makes no sense to reuse any API for higher-level use, since we are talking about two different worlds of how program flow is thought of.
However, that Python script is a wonderful “HOWTO”. I’d like to read the Rust code, but I’m not sure I want to learn Rust if there is a Python example to help with kick-off.
If you wouldn’t, I’d need to do some trickery… so I’m glad you do. What I’m missing (or not?) are public channels where you can push data for general availability. Kind of a public swarm. However, this Swarm thing seems to be you thinking about exactly what I need as a base for my stuff.
The project is actually just starting, so API changes won’t be a problem, as the code using them will also change constantly. This could even be a chance to bring in my needs and raise the chance of them being introduced together with other things.
> are public channels where you can push data to general availability.
For now, we are introducing 1:1 swarms. We hope to quickly provide the UI to support small private groups (the daemon’s API already supports this). For big groups (>8) and for public groups, it’s not yet in the daemon nor fully designed.
Uh, a limitation to a maximum of 8 hurts, a lot. What does “not fully designed” mean? What’s the priority? Will this be done in… let’s say half a year? Can I help speed it up, as in “you basically know what’s to do, tell me, I’ll go for it and deliver a patch”?