Treasury Proposal#48 >> Council Motion#217
Executed

#217 Validator Resource Center and Ranking Website

Proposer:
jam10o
 
in Treasury
23rd Sep '20

We have been working on an idea that’s been recurrent among some of our community members: A Validators Resource Center.

The goal is to build a page focused on validator information on the Kusama network. The website, eventually to live on https://kusama.network/ as a subpage, aims to provide quantitative and qualitative data about validators' performance and help nominators choose the nomination set that works best for them.

The development of this project has two phases, and we would appreciate your feedback and comments on it. Mario, a member of the Kusama Dev Team (the mind behind https://polkastats.io/), wishes to implement this once everything is defined, and the work will be complemented by feedback from @Jonas (W3F), @will (Parity) and me.

You can find a full explanation of the project here.



Council Votes

EGVQ...5eYo
Aye
JKoS...GNC3
Aye
GLVe...F7wj
Aye
GPA7...KK3x
Aye
J9nD...8yuK
Aye
EDky...Xug4
Aye
DMF8...MSXU
Aye
DfiS...sBd4
Aye
DWUA...TJ1j
Aye
Hjui...vtis
Aye
32 Comments
Day7...KzyJ
 
 
24th Sep '20

I find it really useful to have. It should improve the overall picture when gathering information about validators.

HQBy...m2wb
 
 
13th Oct '20

Hi all,

Milestone 1 (project kick-off) is completed, and I have also advanced work for Milestones 2, 3 and 4.

Check first iteration here (warning! live/mocked data):

https://colm3na.github.io/kusama-validator-resource-center/

Feedback welcome! :-)

Milestone 1 tasks

Create repo and project with dependencies

A GitHub repo has been created:

https://github.com/Colm3na/kusama-validator-resource-center

Design UI and describe UX for every page

https://github.com/Colm3na/kusama-validator-resource-center/issues/3

Check first iteration here (live data):

https://colm3na.github.io/kusama-validator-resource-center/

Basic user flows are designed and implemented:

  • Exclude filter: functional for 'Inactive', '100% commission', 'No identity' and 'No verified identity' switches.
  • Pagination
  • Ordering
  • Page size
  • Select/unselect validator
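
The exclude switches above boil down to simple predicates over the validator list. A minimal sketch with hypothetical field names (not the project's actual data model):

```javascript
// Hypothetical validator records; the field names are illustrative,
// not the project's actual data model.
const validators = [
  { name: 'Alice',   active: true,  commission: 100, identity: null },
  { name: 'Bob',     active: false, commission: 3,   identity: { display: 'Bob', verified: false } },
  { name: 'Charlie', active: true,  commission: 5,   identity: { display: 'Charlie', verified: true } },
];

// One predicate per exclude switch; a validator is dropped if any
// enabled switch matches it.
const excludePredicates = {
  inactive:           v => !v.active,
  fullCommission:     v => v.commission === 100,
  noIdentity:         v => v.identity === null,
  noVerifiedIdentity: v => v.identity === null || !v.identity.verified,
};

function applyExcludeFilters(list, enabledSwitches) {
  const predicates = enabledSwitches.map(key => excludePredicates[key]);
  return list.filter(v => !predicates.some(p => p(v)));
}

// With only 'inactive' enabled, Bob is filtered out:
applyExcludeFilters(validators, ['inactive']);  // Alice and Charlie remain
```
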

Define UI design, color scheme, logos

Done, thanks to resources provided by Iggy from W3F. I tried to follow the Kusama website's look and feel as much as possible:

  • Logo
  • Fonts
  • Colors
  • Footer with links to Kusama Privacy Policy and Terms and Conditions pages

Define qualitative data structure (JSON)

Done: https://github.com/Colm3na/kusama-validator-resource-center/issues/2

Dockerize

This has been replaced by GitHub Pages implementation: https://github.com/Colm3na/kusama-validator-resource-center/issues/17

Di9w...Za8x
 
 
13th Oct '20

Very nice! Extra kudos for removing docker!

HQBy...m2wb
 
 
27th Oct '20

Hi all,

I want to share some updates about the project. Milestones 3 (data collection) and 4 (UI/UX) are nearly done; only a few pending tasks remain, related to providing better descriptions of the metrics: how we get the data from the chain and how each metric is evaluated:

  • Iggy UI suggestions are implemented
  • New always visible selected validators widget
  • Added Self stake and Other stake columns
  • New search filter
  • New exclude filters (all of them are functional now)
  • All validator metrics and ratings are live (on-chain)
  • Validators are by default ordered by ratings sum
  • Improved responsive behaviour

Now the tool is pretty functional, please try it at:

https://colm3na.github.io/kusama-validator-resource-center/

Feedback welcome!

Apart from that, I want to propose adding another milestone (Milestone 5) to the project to implement these features:

  • Logic to disallow selecting more than one validator from a cluster
  • Integrate Polkadot JS extension to allow nominators to nominate selected validators.

I think integrating the Polkadot JS extension is the logical path for this tool, allowing nominators to easily nominate their selected validators with a few clicks.

HQBy...m2wb
 
 
17th Nov '20

Hi all,

I want to update about the development of the Validator Resource Center and Ranking Website.

Milestone 2 (50% completed)

Milestone 3 (100% completed)

  • Define and collect quantitative data from chain: done. Now data is automatically refreshed every 5 minutes.
  • A lot of work went into optimization for faster data collection.

Milestone 4 (100% completed)

  • UI implementation is done and all of Iggy's suggestions are implemented (thanks for the feedback!)
  • Validator set and exclude filter are stored in a cookie for data persistence.

Milestone 5 (100% completed)

  • Integrate Polkadot JS extension: done; you can select any account from the extension. We check that there is no ongoing election, that the account's role is controller or stash/controller, and that there are transferable funds to pay the extrinsic fee.
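
The three checks above can be sketched as a single guard function (plain objects stand in for the chain and account state the dApp reads via polkadot-js; every field name here is an assumption, not the project's real code):

```javascript
// Hypothetical snapshot of the chain/account state the dApp would
// fetch via polkadot-js before enabling the Nominate button.
function canNominate(state) {
  // 1. No ongoing validator election.
  if (state.electionOngoing) {
    return { ok: false, reason: 'election in progress' };
  }
  // 2. The selected account must be a controller (or a combined
  //    stash/controller); a plain stash cannot sign staking.nominate.
  if (state.role !== 'controller' && state.role !== 'stash/controller') {
    return { ok: false, reason: 'account is not a controller' };
  }
  // 3. Enough transferable funds to pay the extrinsic fee.
  if (state.transferable < state.estimatedFee) {
    return { ok: false, reason: 'insufficient funds for fee' };
  }
  return { ok: true };
}

// Example: a controller account with funds, outside an election window:
canNominate({ electionOngoing: false, role: 'controller', transferable: 10, estimatedFee: 1 });
```
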

Additional work

The dApp can now be used to provide a validator ranking for any Substrate-based network that uses these pallets:

  • Staking
  • Identity
  • Democracy
  • Council

It currently supports Kusama, Polkadot, Edgeware and Stafi out of the box; you only need to uncomment the desired network config in config.js and reload the dApp. Documentation on how to use it with another network has been added to the README, along with some screenshots. Customization should be straightforward using the built-in theme support. Thanks to all who contributed cool ideas and feedback!
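
The network switch described might look roughly like this in config.js (illustrative field names and values; see the repository README for the real file):

```javascript
// Illustrative config.js sketch: one network block is active at a
// time; uncomment the one you want and reload the dApp.
export const network = {
  id: 'kusama',
  name: 'Kusama',
  tokenSymbol: 'KSM',
  tokenDecimals: 12,
  nodeWs: 'wss://kusama-rpc.polkadot.io',
};

// export const network = {
//   id: 'polkadot',
//   name: 'Polkadot',
//   tokenSymbol: 'DOT',
//   tokenDecimals: 10,
//   nodeWs: 'wss://rpc.polkadot.io',
// };
```
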

The next step for us is to organise a migration of the live version to Kusama.network, and continue with phase 2 of the project.

Feedback welcome!

Di9w...Za8x
 
 
18th Nov '20

Very nice work, looking forward to further progress!

GLVe...F7wj
 
 
27th Nov '20

The Validator Ranking, to be used by nominators to elect the best set of validators based on their preferences, is now integrated into the Kusama website! By the community, for the community, and funded 100% on-chain by the Treasury!

The website aims to provide quantitative and qualitative data about validators' performance and help nominators choose the nomination set that works best for them. In this phase of the project, the Ranking works manually: each user needs to filter by their preferences in order to see the best options for them.

Phase 2 (to kick off after the Council vote on the proposal) implements an automatic selection algorithm that weighs the user's individual preferences (e.g., trade-offs between security, reputation and profitability) and gives recommendations based on them.

EYBF...64Hm
 
 
30th Nov '20

Hi, we at the Ryabina team like the idea of creating a validator ranking. We agree that it could greatly help the community.

We would like to propose several improvements, and we invite everyone to evaluate them and join the discussion.

1. Weight rankings.

Current implementation: Validators are sorted by the sum of the ratings of all of a validator's criteria, listed on this page: https://validators.kusama.network/metrics. The total ranking is not based on the "weight" of each criterion: every criterion counts the same. For example, "Validator has an unapplied slash" is equal to "Validator doesn't use a sub-identity".

Solution: We propose implementing a "weight" for every criterion, to distinguish which ratings are more or less important for the total score.

2. Parameter historySize

Current implementation: The historySize parameter is not public and not changeable, even though most criteria depend on it (slashes, avg. era points, governance participation, commission). The depth of view is limited to one week: a nominator can see whether a validator was slashed during the last week, but there is no way to check its slashings over a longer period.

Solution: We propose adding this parameter to the main web page (not only in the descriptions) and making it changeable within some range.

3. Cluster members

Current implementation: The cluster-detection mechanism is implemented incorrectly. Some validators that are in fact a cluster do not use sub-accounts, so they are still listed after filtering. As an example, when this filter is activated, we get the following result:

  • Ryabina - 0
  • P2P - 0
  • ZUG - 1
  • Polkastats - 2
  • DragonStake - 2
  • Cryptium Labs - 15

Solution: It seems more accurate to analyze not only sub-accounts but also the identity of validators. Until then, this filter should be turned off.

Current implementation: Users can include/exclude validators from the same cluster with the filter, but even when this filter is not applied, there is still no way to nominate several validators from the same cluster, even if that is the nominator's will. This logic also conflicts with the sub_accounts criterion: validators with sub-accounts gain additional rating points, yet the inability to nominate more than one of them and the "Cluster Member" filter limit them at the same time.

Solution: We believe this tool should help nominators, not act as a censor deciding for them whom they should and should not nominate. That right should be fully reserved for the nominator. Remove the hardcoded inability to nominate more than one of a cluster's validators.

4. Criterion “Address creation date”

Current implementation: The model does not consider a validator's sub-accounts: the rating of an old validator's sub-account is equal to that of a brand-new validator's account.

Solution: This parameter has to count the age of the parent validator account.

Current implementation: Coefficient 4 is very strict (https://github.com/Colm3na/kusama-validator-resource-center/blob/master/components/metrics/Address.vue#L70). To get the minimum score a new validator has to wait 119 days, and 1067 days for the maximum score. As the network ages, this period will only increase.

Solution: We propose creating levels, for example: < 600000 blocks - Bad; > 4200000 blocks - Very Good.
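
The proposed levels could be sketched as a banded classifier; only the 600000 and 4200000 thresholds come from the comment above, the intermediate bands are made-up placeholders:

```javascript
// Sketch of a banded rating for address age, measured in blocks since
// the account was created. Only the 600000 and 4200000 thresholds are
// from the proposal; the middle bands are illustrative placeholders.
function rateAddressAge(ageInBlocks) {
  if (ageInBlocks < 600000) return 'Bad';
  if (ageInBlocks > 4200000) return 'Very Good';
  if (ageInBlocks > 2400000) return 'Good';  // placeholder band
  return 'Neutral';                          // placeholder band
}

// Example: a freshly created account rates as 'Bad':
rateAddressAge(100000);  // 'Bad'
```
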

5. Criterion “Nominators”

Current implementation: This criterion uses the quantity of active nominators. It cannot be used as planned because of the post-Phragmén optimisation algorithm implemented in the Kusama network, which is aimed at reducing the final number of nominators. Also, the cumulative quantity of nominators should be taken into account, because this quantity can be manipulated: bad actors can create a lot of nominators with small stakes.

Solution: We propose reviewing the logic of this criterion: change how the number of nominators is calculated and set a nomination threshold to avoid abuse, or remove this criterion from the ranking.

6. Criterion “Commission over time”

Current implementation: The following criteria are used now: commission less than or equal to 5%; or commission greater than 5%, less than or equal to 10%, and decreasing over time. As a result, a validator with a 1% fee gets the same rating points as a validator with a 10% fee at the beginning of the period and 9.9% at the end. Also, setting a rating of "Bad" for 0% and "Very Good" for 0.1% seems odd. With such an approach to fee assessment, nominators' interests are not respected.

Solution: This criterion should be redesigned.
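
To make the contradiction concrete, here is a direct transcription of the rule as described above (a sketch of the criterion as read from the page, not the project's actual code):

```javascript
// Rating rule as described: a point for commission <= 5%, or for
// commission in (5%, 10%] that decreased over the observed period.
// history: array of per-era commission percentages, oldest first.
function commissionPoint(history) {
  const latest = history[history.length - 1];
  if (latest <= 5) return 1;
  const decreased = latest < history[0];
  if (latest <= 10 && decreased) return 1;
  return 0;
}

// A 1% validator and a 10% -> 9.9% validator score identically:
commissionPoint([1, 1, 1]);      // 1
commissionPoint([10, 10, 9.9]);  // 1
```
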

Conclusion.

The current implementation has logical contradictions and mistakes; that is why the product is not yet ready for use by all nominators, and embedding it on the official Kusama site was premature.

Di9w...Za8x
 
 
1st Dec '20

Excellent feedback, I can only agree.

GLVe...F7wj
 
 
1st Dec '20

Some answers to Ryabina's concerns below:

  1. Weight rankings: All metrics have the same weight in Phase 1: the user has the power to exclude validators based on their preferences, which is also why the criteria are published there.

  2. historySize can be selected before data collection, so this is not a problem, but selecting the full history can take a long time and most users will end up selecting one week. This limitation will not exist as it does now in phase 2 (development to start soon): the use of a backend to store historical data will help in this respect. Happy to discuss this implementation in detail.

  3. Cluster members: We agree that we are not catching cluster members that don't use a sub-identity. This is the criterion used for clusters now, since the majority of them use it; however, it is true that some clusters can escape it as currently implemented. In general, it is agreed that clusters can be dangerous for the network, so we need a way to educate nominators on how to identify them and which criteria to use when electing validators from a cluster. In phase 1 this filter is on by default; phase 2 will include the possibility to disable it. So as not to base this criterion solely on sub-identities, we are happy to discuss how to improve it further.

  4. Nominators: We are presenting the same data as the source of truth (polkadot-js Apps), which generally marks validators as oversubscribed. Happy to discuss how to improve this further: we use it both to show how far validators are from being oversubscribed and as a (biased) metric of popularity.

  5. Commission: The current metric is fair for validators, nominators and network sustainability. Implementing commission charts on the validator page should expose those "tricks". Happy to redesign for accuracy if needed.

It is important to understand that the ranking is in its first phase on the way to becoming a solution for all nominators. Phase 2 of the project (to start soon) will cover most of these concerns; we would be happy to include Ryabina in the discussion to get their input on the second phase, for accuracy and with the goal of improving the tool.

GLVe...F7wj
 
 
2nd Dec '20

Noting here some of the points discussed with Ryabina on a call yesterday, for the team to fix the issues:

Action points before phase II:

1. Weight Ranking

  • Goal: criteria across the ranking should weigh differently, so we need to iterate on the criteria, adding a weight to each that ultimately affects the ranking list
  • Weak point: what is an objective PoV to prioritise one over the other?
  • Action (ordered from higher to lower weight, thinking of the most profitable and secure option for nominators): slashes over time, era points, identity, commission over time, address creation date, sub-accounts, nominators, governance participation, frequency of payouts.
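
The priority order above could be applied as a weighted sum. In this sketch only the ordering of the weights comes from the list; the numeric values are placeholders:

```javascript
// Criteria in the agreed priority order; the descending weights are
// illustrative placeholders that only preserve the ordering, not
// agreed values.
const weights = {
  slashes: 9, eraPoints: 8, identity: 7, commission: 6,
  addressAge: 5, subaccounts: 4, nominators: 3,
  governance: 2, payoutFrequency: 1,
};

// scores: per-criterion rating in [0, 1] for one validator;
// missing criteria count as 0.
function weightedScore(scores) {
  return Object.keys(weights)
    .reduce((sum, key) => sum + weights[key] * (scores[key] || 0), 0);
}

// Under these weights, an unapplied slash now costs more than a
// missing sub-identity, unlike the equal-weight Phase 1 ranking.
weightedScore({ eraPoints: 1, identity: 1 });
```
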

2. historySize

  • Action: add info on the main site as a temporary solution and solve this point on phase 2 with the use of a backend.

3. Cluster members:

  • Action: disable the default + add info for nominators for awareness. Phase 2: Add the cluster limit criteria as an option to be enabled by user (default: ON) + a combination of the analysis of sub-identities together with string-comparison of validator names

4. Address creation date:

  • Action: change the rating levels, using not only the sub-account but also the age of the parent validator account.

5. Nominators:

  • Action to be taken on phase II: set the limit to only “oversubscribed”

6. Commission over time:

  • Action: phase II to show historical data on validators’ change of commissions. Weak point: what’s a critical change? How to define it without this becoming the exception?

Some of the first issues created for these changes:

https://github.com/Colm3na/kusama-validator-resource-center/issues/89

https://github.com/Colm3na/kusama-validator-resource-center/issues/88

https://github.com/Colm3na/kusama-validator-resource-center/issues/87

HNgz...W1V7
 
 
3rd Dec '20

Nice conversation here. Thanks to all. I mostly agree with all the points, but I would like to comment on the "Cluster filter". I see this tool as an opportunity to improve, balance and decentralize the network for the common good. In the end, that will surely conflict with clusters' own interests.

I do not agree that letting users choose just one validator per cluster is a bad thing. It does not amount to censorship in my view, as they can nominate freely using other tools. We are just introducing friction, and the final decision should lie with the council. Clusters themselves have their own tools where, I am sure, they do not allow choosing anything but their own validators.

To summarize, the validator ranking is a tool of Kusama governance that can encourage and direct voting power for the benefit of the network. That means, among other things, reducing the visibility of clusters and increasing the visibility of independent, diverse and reliable validators.

HQBy...m2wb
 
 
3rd Dec '20

Hi all,

Some improvements for address creation metric are already implemented:

https://github.com/Colm3na/kusama-validator-resource-center/pull/91

  • Use the best value (older address) between the validator stash address and its parent identity address for rating.
  • Update metric definition in /metrics page and README
  • Show info about both addresses in metric description
  • Add loading text

NOTE: Changes are not deployed to production yet

HQBy...m2wb
 
 
3rd Dec '20

Also finished:

Added info about history size:

https://github.com/Colm3na/kusama-validator-resource-center/pull/92

  • Added text noting that history size is limited
  • Added a warning alert that the platform is under development and metrics are subject to change

Replace the one-cluster-member-per-set limitation and include a warning on the nominate page:

https://github.com/Colm3na/kusama-validator-resource-center/pull/94

  • Removed Cluster members exclude filter
  • Removed 1 cluster member per set limitation (only show a warning)
  • Included a warning on the nominate page if the validator set includes more than one member of a cluster
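
The warning-only behaviour described above can be sketched as a small check over the selected set (hypothetical record shape; `cluster` is null for independent validators):

```javascript
// Returns the names of clusters that appear more than once in the
// selected validator set; an empty array means no warning is needed.
function clusterWarnings(selected) {
  const counts = {};
  for (const v of selected) {
    if (v.cluster) counts[v.cluster] = (counts[v.cluster] || 0) + 1;
  }
  return Object.keys(counts).filter(name => counts[name] > 1);
}

// The nominate page shows a warning but still allows the nomination:
clusterWarnings([
  { name: 'A', cluster: 'X' },
  { name: 'B', cluster: 'X' },
  { name: 'C', cluster: null },
]);  // ['X']
```
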
DfiS...sBd4
 
 
7th Dec '20

I like Ryabina's post; it looks very thoughtful. I very much like the idea of weighted ranking.

My two cents:

  • at the moment I myself cannot tell whether a nominator would want a validator to be part of a cluster or not (on the one hand this may be good, since the validator has a larger infrastructure; on the other hand it may be bad, since less time is devoted to each validator and its clients, and it can also worsen decentralization). In this regard, I would remove the value judgment (good/bad) from this criterion, leave just the numbers, and not take it into account in the default rating;

  • it does not seem reasonable to me to limit the evaluation of "nominators" to "oversubscribed" only, since in my opinion it matters how many people have entrusted their stake as nominators to this validator (more is better);

  • in addition to the commission, I would also add a "commission volatility" parameter, which in my opinion is much more important than the plain "commission" parameter. Any reasonable formula for calculating volatility should work; you just need to choose the right boundaries for the assessment.

Thanks participants for the great discussion!

HNgz...W1V7
 
 
7th Dec '20
  • While most of us could agree that a 40+ validator cluster is a bad thing to avoid, I do agree that tagging it with a good/bad label is not the best approach.

  • Given that the Thousand Validators programme now allows 2 validators per entity, I would leverage that fact and use numbers instead of a good/bad criterion.

  • It would be nice if we could agree on a healthy number of validators per entity. I would suggest that a number between 5 and 10 could do the trick for "good", while more than 30 seems to indicate high centralization, that is, "bad".

Regards.

