What is the Jura protocol?

It’s ultrafast, lightweight, and feeless. The Jura protocol is a completely new way of looking at blockchain, with the potential to achieve over 100,000 transactions per second (TPS) while supporting all the dapps and smart contracts we know and love. It’s customizable to different industries and applications, ranging from IoT to finance to medicine to transportation.

The Jura protocol is essentially a suite of four innovations wrapped into one: an individual, account-based directed acyclic graph (DAG) data structure we’ve named the Fusus; a Proof of Utility (PoU) consensus mechanism; a dynamically monitored and distributed sharding (DMDS) technique for distributing data; and an AI security and learning layer to prevent malicious attacks.

Fusus Data Structure

What is the Fusus and what’s so special about it?

At the core of Jura is the Fusus: a DAG-based data structure that we invented. Every user account has a unique Fusus associated with the account’s private key that stores the account’s data and handles its transactions. The Fusus treats receiving and sending transactions differently: receiving transactions are stored in a DAG, while a sending transaction condenses all of the prior receiving transactions into a single new genesis block. By rehashing and pruning the prior history of the account, we get much higher lookup efficiency and lower storage requirements while retaining all of the information.
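
To make this concrete, here’s a rough sketch of the idea in Python. This is illustrative only, not our production code: the class and method names (`Fusus`, `receive`, `condense`, `send`) and the hashing details are assumptions made for the example.

```python
import hashlib
import json
import time


def _hash(obj):
    """Stable hash of a JSON-serializable object."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()


class Fusus:
    """Illustrative per-account structure: receives accumulate in a small DAG,
    while a send condenses the account's history into a fresh genesis block."""

    def __init__(self, account_id):
        self.account_id = account_id
        self.genesis = {"type": "genesis", "balance": 0, "prev": None}
        self.receives = []  # pending receiving transactions (the DAG part)

    def receive(self, amount, sender):
        # New receiving transactions attach to recent tips, so heavy traffic
        # produces a wide, DAG-like structure rather than a linear chain.
        parents = [_hash(tx) for tx in self.receives[-2:]] or [_hash(self.genesis)]
        self.receives.append(
            {"type": "receive", "amount": amount, "from": sender, "parents": parents}
        )

    def condense(self):
        """Fold the old genesis plus all pending receives into a new genesis
        block, pruning the account's prior history but keeping a hash link."""
        balance = self.genesis["balance"] + sum(tx["amount"] for tx in self.receives)
        self.genesis = {
            "type": "genesis",
            "balance": balance,
            "prev": _hash(self.genesis),
            "timestamp": time.time(),
        }
        self.receives = []  # prior receives are pruned after being summarized
        return self.genesis

    def send(self, amount, recipient):
        """A sending transaction first condenses prior receives, then spends."""
        genesis = self.condense()
        if amount > genesis["balance"]:
            raise ValueError("insufficient balance")
        genesis["balance"] -= amount
        return {"type": "send", "amount": amount, "to": recipient,
                "parent": _hash(genesis)}


# Usage: many receives build the DAG; a single send condenses them.
acct = Fusus("alice")
acct.receive(5, "bob")
acct.receive(7, "carol")
tx = acct.send(3, "dave")  # balance after condensing: 12, then 9 after the send
```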

Because of this, the TPS our system can achieve is not significantly affected by network traffic volume: in periods of low traffic, the corresponding Fusus will resemble a linear ledger, while in periods of high traffic, when an account receives many transactions in a short time, the Fusus takes on a DAG-like structure. Note that only individual accounts are stored as DAG-like data structures, not the entire system history.

What happens if there isn’t a new sending transaction? What if I use my account solely to receive payments and nothing else?

After a new receiving transaction arrives, the Fusus will wait for a set amount of time (e.g., a couple of minutes); if no sending transaction occurs, an auto-reset kicks in and creates a new genesis block, summarizing all of the receiving transactions as usual, just without the sending transaction.
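
In terms of the sketch above, the auto-reset amounts to a simple timer check. The 120-second window and the function name are illustrative; all the text pins down is “a couple of minutes.”

```python
AUTO_RESET_SECONDS = 120  # illustrative "couple of minutes" window


def maybe_auto_reset(fusus, last_receive_time, now):
    """If no sending transaction has arrived within the auto-reset window,
    condense the pending receives into a new genesis block anyway."""
    if fusus.receives and now - last_receive_time >= AUTO_RESET_SECONDS:
        fusus.condense()
        return True
    return False
```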

What’s preventing spamming attacks from happening?

We have a proof of verifiable random time (PoVRT) module that prevents this. Essentially, after sending a transaction, a user has to wait a small amount of time before sending another. This small random amount of time will only range from several milliseconds to single-digit seconds, and will be updated and broadcast to the system at frequent intervals. If a user tries to initiate a second transaction without waiting the required amount of time, the transaction will be rejected and must be reinitiated after a longer interval. If the user again fails to wait the appropriate amount of time, the transaction will again be rejected and the user must wait even longer. If this sounds like what happens when you fail to unlock your cellphone multiple times, you’re right: it’s the same concept, in the sense that the lockout time increases exponentially.
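
Here’s a rough sketch of that spacing check. The concrete delay range, the doubling factor, and the class name are illustrative assumptions, not our published parameters.

```python
import random
import time


class PoVRTGate:
    """Illustrative spacing check: each account must wait a small random delay
    between sends; violations grow the required wait exponentially."""

    def __init__(self, min_delay=0.005, max_delay=2.0):
        # Delay range: "several milliseconds to single-digit seconds";
        # the exact numbers here are placeholders.
        self.min_delay = min_delay
        self.max_delay = max_delay
        self.required_wait = {}  # account -> current required wait (seconds)
        self.last_send = {}      # account -> timestamp of last accepted send

    def fresh_delay(self):
        # Stands in for the random delay broadcast to the system at intervals.
        return random.uniform(self.min_delay, self.max_delay)

    def try_send(self, account, now=None):
        now = time.time() if now is None else now
        wait = self.required_wait.get(account, self.fresh_delay())
        last = self.last_send.get(account)
        if last is not None and now - last < wait:
            # Too soon: reject and double the required wait, like a phone lockout.
            self.required_wait[account] = wait * 2
            return False
        # Accepted: record the send and reset to a fresh random delay.
        self.last_send[account] = now
        self.required_wait[account] = self.fresh_delay()
        return True
```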

Why such a big range for the prolonged spawning and auto-reset times?

We want a platform that is readily customizable to different industries. For example, in applications that require large, frequent transactions, such as finance, the PoVRT delay times will be much shorter than in applications such as IoT, where sending and receiving transactions are spaced further apart. Again, nothing is hard-coded into our system apart from the auto-reset time, and the spawning delays will be randomized by the system itself.

PoU Consensus Mechanism

What is PoU and how is it different than PoW or PoS?

Think of PoU as a sort of credit score: a single value that describes the trustworthiness of an account by taking a range of different variables into account. In the consensus mechanism, votes from an entity with a higher PoU are weighted more heavily than votes from one with a lower PoU. Our model thus far takes four variables into account: stake size, account age, average staking time, and last staking time. Each variable is assigned a dynamically changing weight that adjusts itself in accordance with the system demographics.
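
As a rough sketch of how such a score could be combined: the variable scales, the example weights, and the saturating curve standing in for the CDF mapping discussed further below are all illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class AccountStats:
    stake_size: float         # tokens currently staked
    account_age: float        # days since the account was created
    avg_staking_time: float   # average length of past stakes, in days
    last_staking_time: float  # recency of the most recent stake (illustrative units)


def pou_score(stats, weights):
    """Weighted combination of the four variables. Each variable is first
    squashed into [0, 1) so no single dimension can dominate; the weights
    themselves are adjusted over time by the system."""
    def squash(x, scale):
        # Simple saturating curve as a stand-in for the CDF-based mapping;
        # the scales are placeholder values.
        return x / (x + scale)

    normalized = {
        "stake_size": squash(stats.stake_size, 1000.0),
        "account_age": squash(stats.account_age, 180.0),
        "avg_staking_time": squash(stats.avg_staking_time, 30.0),
        "last_staking_time": squash(stats.last_staking_time, 30.0),
    }
    total = sum(weights.values())
    return sum(weights[k] * normalized[k] for k in normalized) / total


# Example: a whale-sized stake alone does not buy a top score.
weights = {"stake_size": 0.3, "account_age": 0.25,
           "avg_staking_time": 0.25, "last_staking_time": 0.2}
newcomer = AccountStats(stake_size=50_000, account_age=2,
                        avg_staking_time=1, last_staking_time=1)
print(round(pou_score(newcomer, weights), 3))
```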

In that sense, PoU is akin to a multidimensional, dynamic PoS, except that one cannot game the system simply by controlling the majority of the stake. Although we know which variables go into a higher PoU, we don’t know what the weight of each one is, and furthermore, the weights change over time. This effectively guarantees that no one will remain king of the hill.

Why these four variables? Why only four?

We chose these four because they provide a more holistic view of a user’s history of established trust than stake size alone. Although more variables would undoubtedly yield more insight into the trustworthiness of a user account, having too many would slow down the system. These four blend together stake size, user history, and account usage.

Note that while we’re using these four variables to build up our system right now, new variables may be introduced prior to testnet and/or mainnet launch. The basic idea stays the same, though: with a multivariate, dynamic consensus mechanism, we can prevent anyone from gaming the system while more accurately gauging the trustworthiness of the user.

Wouldn’t calculating PoU instead of PoS or PoW cause unnecessary strain on the system and slow things down overall?

Not exactly. The weights for each variable are determined by a fixed, public algorithm that takes in both system demographics and random number elements. This calculation is relatively straightforward, and adjusting PoU scores can be done quickly.
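
For illustration only, a weight update along these lines would do the job; the demographic inputs, the random jitter, and every constant here are assumptions rather than the actual algorithm.

```python
import random


def update_weights(demographics, epoch_seed):
    """Derive normalized weights for the four PoU variables from system-wide
    demographics plus a shared random seed, so every node computes the same
    weights while no one can predict them far in advance."""
    rng = random.Random(epoch_seed)
    variables = ["stake_size", "account_age", "avg_staking_time", "last_staking_time"]
    raw = {}
    for var in variables:
        # Hypothetical demographic bias, e.g. down-weight stake size when
        # stake is concentrated in a few accounts, then add random jitter.
        bias = demographics.get(var + "_bias", 1.0)
        raw[var] = bias * rng.uniform(0.5, 1.5)
    total = sum(raw.values())
    return {var: weight / total for var, weight in raw.items()}


# Example: weights always sum to 1 and change with each new seed.
print(update_weights({"stake_size_bias": 0.7}, epoch_seed=42))
```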

Can’t I still get a high PoU score by being very, very strong in one variable while being weak in another?

Not really. For example, let’s say someone new to the system decides to purchase a huge stake: although their stake size is large and would contribute to a high PoU score, their account age, average staking time, and last staking time all contribute to a lower PoU score.

PoU scores are all bounded between 0 and 1, and each variable is evaluated according to a cumulative distribution function (CDF). That is, if a user has twenty times the stake size of another user, his or her PoU score will only be marginally higher, provided all other variables are the same. Because CDF curves approach 1 only asymptotically as a value approaches infinity, no one can ever attain a “perfect” PoU score. If a malicious user were to attempt to gain the highest voting power possible, he or she would essentially be forced to run an account that follows the rules of the system for a very long period of time; in other words, they would be converted into a good citizen of the system.
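
As a purely illustrative example, here’s the effect using an exponential CDF; the actual distributions and parameters aren’t something we’re specifying here.

```python
import math


def cdf_score(value, mean):
    """Exponential CDF: maps any non-negative value into [0, 1), saturating
    as the value grows, so extreme inputs give diminishing returns."""
    return 1.0 - math.exp(-value / mean)


# Suppose the network-wide mean stake is 1,000 tokens (an illustrative figure).
mean_stake = 1_000.0
modest = cdf_score(2_000.0, mean_stake)   # about 0.86
whale = cdf_score(40_000.0, mean_stake)   # 20x the stake, yet still just below 1.0

# Despite staking twenty times more, the whale's stake-size component is only
# marginally higher, and it can never actually reach a "perfect" 1.0.
print(f"2,000 tokens:  {modest:.4f}")
print(f"40,000 tokens: {whale:.4f}")
```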

Sharding (DMDS)

What is sharding?

Think divide and conquer: instead of tackling a large problem all at once across the whole P2P network, why not divide it up into smaller, strategic groups? This cuts down on inter-node traffic, latency, and so on, making the entire system more efficient.

How is the system built?

The sharding system contains three layers: a router layer, a validation cache layer, and a monitor layer. When a transaction enters the system, it is assigned to a shard by the router layer and pre-validated by the validation cache layer. Within each shard, the constituent nodes vote on the validity of the transaction, weighted by their PoU scores. Each full node is overseen by a monitor from the monitor layer, whose role is covered in the next section.
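
A minimal sketch of the routing and shard-level voting flow: hashing transactions into shards and the 50% weighted-vote threshold are illustrative assumptions.

```python
import hashlib


def assign_shard(tx_id, num_shards):
    """Router layer: deterministically map an incoming transaction to a shard."""
    digest = hashlib.sha256(tx_id.encode()).hexdigest()
    return int(digest, 16) % num_shards


def shard_vote(votes, pou_scores, threshold=0.5):
    """Shard-level validation: weight each node's yes/no vote by its PoU score
    and accept the transaction if the weighted 'yes' share clears the threshold."""
    total = sum(pou_scores[node] for node in votes)
    if total == 0:
        return False
    yes = sum(pou_scores[node] for node, vote in votes.items() if vote)
    return yes / total >= threshold


# Example: three nodes in one shard vote on a transaction.
shard = assign_shard("tx-42", num_shards=8)
accepted = shard_vote(
    votes={"node_a": True, "node_b": True, "node_c": False},
    pou_scores={"node_a": 0.9, "node_b": 0.4, "node_c": 0.7},
)
print(shard, accepted)  # weighted 'yes' share is 1.3 / 2.0 = 0.65, so True
```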

What roles are there in the system?

First and foremost, we have our storagers: users who provide storage space to carry out transactions. Each full node is overseen by a monitor: a user with a high PoU score who functions as a moderator of sorts for the shard. Monitors ensure that storage space is sufficient for upcoming calculations, that CPU is adequate, and so on. If many nodes in a shard drop out or a shard malfunctions, monitors will modify the router layer to route traffic to other shards so that transactions are still carried out as planned. If a monitor detects a suspicious transaction, it alerts the judgers. Judgers, the users with the highest PoU scores, function as the final decision makers on whether a transaction or user behavior is legitimate or malicious.
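
A sketch of how a monitor could trigger rerouting and escalation; the health thresholds and the method names (`reroute`, `review`) are illustrative assumptions.

```python
class Monitor:
    """Illustrative monitor for one shard: checks resources, reroutes traffic
    on failure, and escalates suspicious transactions to the judger panel."""

    def __init__(self, shard_id, router, judgers, min_healthy_nodes=3):
        self.shard_id = shard_id
        self.router = router    # assumed to expose reroute(from_shard)
        self.judgers = judgers  # assumed to expose review(tx)
        self.min_healthy_nodes = min_healthy_nodes

    def check_shard(self, healthy_nodes, storage_ok, cpu_ok):
        # If too many nodes drop out or resources are inadequate, divert traffic
        # away from this shard so transactions are still carried out as planned.
        if healthy_nodes < self.min_healthy_nodes or not (storage_ok and cpu_ok):
            self.router.reroute(from_shard=self.shard_id)

    def inspect(self, tx, looks_suspicious):
        # Suspicious transactions are escalated rather than decided here.
        if looks_suspicious:
            self.judgers.review(tx)
```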

What happens to malicious nodes?

If a node is deemed malicious by the judger panel, a series of soft or hard locks may be imposed depending on the severity of the attempted activity. Staking penalties may be imposed, PoU score penalties may be levied, and so on, in order to ensure that dishonest nodes do not retain the level of trust they once had.
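
Purely as an illustration, the penalty tiers might look like this; the tier names and percentages are assumptions, not fixed parameters of the system.

```python
def penalize(node, severity):
    """Apply an escalating penalty to a node the judger panel deems malicious.
    The tiers and percentages here are purely illustrative."""
    if severity == "minor":
        node["lock"] = "soft"     # temporary soft lock
        node["pou_score"] *= 0.9  # small PoU score penalty
    else:
        node["lock"] = "hard"     # hard lock for severe behavior
        node["pou_score"] *= 0.5  # large PoU score penalty
        node["stake"] *= 0.8      # staking penalty
    return node


# Example: a severe offense costs both trust and stake.
print(penalize({"lock": None, "pou_score": 0.8, "stake": 10_000}, "severe"))
```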

AI Security Layer

What is the purpose of the AI security layer?

Exactly what the name suggests: by using artificial intelligence and machine learning, we can design algorithms that automatically detect what is most likely a malicious node or suspicious transaction and either block it entirely or send it to the judging layer for further analysis. This also yields increased accuracy over time, as the algorithms learn from the data they accumulate.

Doesn’t this require a central authority?

In the sense that there will be some level of centralization in the first year or two of this project, yes. Although ideally there would be no centralization, offline learning will be required for us to design, build, and test the algorithms that automatically differentiate harmless activity from potentially malicious activity. Anonymized data will be processed by vetted members of our team, with the hope that in the near future, the detection of suspicious activity on our network can be fully automated.

How is the AI security layer built?

Our AI framework comprises three modules. First up is the online logging module: each full node selects and logs features of transaction behavior to collect training data. To get things started, we will use abnormal transactions, transaction behaviors, and malicious nodes that we generate ourselves; in the future, we will train on anonymized, human-reviewed data. This is coupled with the offline training module, which continuously uses the logged data to better predict what constitutes malicious behavior. Finally, the online serving modules, located in the monitor layer, are used to detect malicious transactions and nodes.
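
A rough sketch of how the three modules fit together, using a generic off-the-shelf anomaly detector; the features, the model choice (scikit-learn’s IsolationForest), and the bootstrapping data are illustrative assumptions rather than our final design.

```python
import numpy as np
from sklearn.ensemble import IsolationForest


def log_features(tx):
    """Online logging module: extract simple per-transaction features
    (illustrative choices: amount, gap since last transaction, fan-out)."""
    return [tx["amount"], tx["seconds_since_last"], tx["num_recipients"]]


def train_offline(logged_features):
    """Offline training module: periodically refit an anomaly detector on
    logged (and, later, anonymized human-reviewed) data."""
    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(logged_features)
    return model


def serve_online(model, tx):
    """Online serving module (monitor layer): flag likely-malicious
    transactions for the judger panel and pass the rest through."""
    verdict = model.predict(np.array([log_features(tx)]))[0]  # -1 = anomaly
    return "escalate_to_judgers" if verdict == -1 else "accept"


# Bootstrapping with generated data, as described above.
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[10, 60, 1], scale=[5, 30, 0.5], size=(1000, 3))
model = train_offline(normal_traffic)
print(serve_online(model, {"amount": 10_000, "seconds_since_last": 0.01,
                           "num_recipients": 50}))
```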

System Rewards and Token Economics

Who gets system rewards?

All storagers, monitors, and judgers are eligible for system rewards for providing ample storage space, catching malicious behavior, and so on. We have plenty of tokens available for initial adopters of our system and to account for inflation over time: by our calculations, enough to see the system through over fifty years of activity.

Who are the miners?

There are no miners in this system in the traditional sense.

Further Reading

Who are you and where are you based?

We’re currently a team of seven with academic backgrounds in computer science, applied and pure mathematics, engineering, and statistics. On the business side, our members have worked at top-tier tech companies as well as investment banking and venture capital firms. At the moment, our engineering team is based in China, while our design and business teams are based in the United States. For more info, check out our website below.

Where can I learn more about Jura?

Head over to our website at https://jura.network for the full white paper as well as a quick one pager highlighting the key features of our technology. Also, feel free to follow us on any of the following media channels for the latest updates and news:

- Twitter: https://twitter.com/JuraProtocol

- Medium: https://medium.com/@juraprotocol

- Telegram: https://t.me/juranetwork (@juranetwork)