A View on the AWS Bitcoin Cloud Mining Scam

AWS mining - GPU mine Ethereum?

I don't care that it won't be profitable; I just want to know whether it's actually possible, and if someone could point me to a guide. Thanks in advance.
submitted by p00page to EtherMining

IS THIS STILL WORKING? - Ethereum mining on AWS in 5mins

submitted by Mishef to EtherMining

Is Ethereum GPU Cloud Mining with AWS, Hetzner or OVH Profitable?

submitted by patrick_k to BuyNHodl

Mining Ethereum on AWS

Hi,
I have 5K credits on AWS and I'm wondering if it's worth setting up an ETH miner. How much could I possibly gain from it?
Are there other coins that might be more profitable? I've heard ZCash might be.
Thanks!
submitted by t_a_232323 to ethereum

[Beginner] I have $250 of AWS credit. Is it worth mining Ethereum with it, via CPU or GPU instances?

A loss would be acceptable, since there is no other way for me to get the credit out of there. Any idea how much that could net me?
submitted by learnjava to ethereum

Dragonchain Great Reddit Scaling Bake-Off Public Proposal


Dragonchain Public Proposal TL;DR:

Dragonchain has demonstrated handling twice Reddit’s entire daily volume (votes, comments, and posts, per Reddit’s 2019 Year in Review) in a 24-hour demo on an operational network. Every single transaction on Dragonchain is decentralized immediately through 5 levels of Dragon Net, and then secured with combined proof on Bitcoin, Ethereum, Ethereum Classic, and Binance Chain, via Interchain. At the time, in January 2020, the entire cost of the demo was approximately $25K on a single system (transaction fees locked at $0.0001/txn). With current fees (lowest fee $0.0000025/txn), this would cost as little as $625.
Watch Joe walk through the entire proposal and answer questions on YouTube.
This proposal is also available on the Dragonchain blog.

Hello Reddit and Ethereum community!

I’m Joe Roets, Founder & CEO of Dragonchain. When the team and I first heard about The Great Reddit Scaling Bake-Off we were intrigued. We believe we have the solutions Reddit seeks for its community points system and we have them at scale.
For your consideration, we have submitted our proposal below. The team at Dragonchain and I welcome and look forward to your technical questions, philosophical feedback, and fair criticism, to build a scaling solution for Reddit that will empower its users. Because our architecture is unlike other blockchain platforms out there today, we expect to receive many questions while people try to grasp our project. I will answer all questions here in this thread on Reddit, and I've answered some questions in the stream on YouTube.
We have seen good discussions so far in the competition. We hope that Reddit’s scaling solution will emerge from The Great Reddit Scaling Bake-Off and that Reddit will have great success with the implementation.

Executive summary

Dragonchain is a robust open source hybrid blockchain platform that has stood the test of time since our inception in 2014. We have continued to evolve to harness the scalability of private nodes while taking full advantage of the security of public decentralized networks, like Ethereum. We have a live, operational, and fully functional Interchain network integrating Bitcoin, Ethereum, Ethereum Classic, and ~700 independent Dragonchain nodes. Every transaction is secured to Ethereum, Bitcoin, and Ethereum Classic. Transactions are immediately usable on chain, and the first decentralization is seen within 20 seconds on Dragon Net. Security increases further on the public networks ETH, BTC, and ETC within 10 minutes to 2 hours. Smart contracts can be written in any executable language, offering full freedom to existing developers. We invite any developer to watch the demo, play with our SDKs, review our open source code, and help us move forward. Dragonchain specializes in scalable loyalty & rewards solutions and has built a decentralized social network on chain, with very affordable transaction costs. This experience can be combined with the insights Reddit and the Ethereum community have gained in the past couple of months to roll out the solution at a rapid pace.

Response and PoC

In The Great Reddit Scaling Bake-Off post, Reddit has asked for a series of demonstrations, requirements, and other considerations. In this section, we will attempt to answer all of these requests.

Live Demo

A live proof of concept showing hundreds of thousands of transactions
On Jan 7, 2020, Dragonchain hosted a 24-hour live demonstration during which a quarter of a billion (250 million+) transactions executed fully on an operational network. Every single transaction on Dragonchain is decentralized immediately through 5 levels of Dragon Net, and then secured with combined proof on Bitcoin, Ethereum, Ethereum Classic, and Binance Chain, via Interchain. This means that every single transaction is secured by, and traceable to these networks. An attack on this system would require a simultaneous attack on all of the Interchained networks.
24 hours in 4 minutes (YouTube)
The demonstration was of a single business system, and any user is able to scale this further, by running multiple systems simultaneously. Our goals for the event were to demonstrate a consistent capacity greater than that of Visa over an extended time period.
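As a sanity check on the throughput claim, the demo's average rate can be derived directly from the headline numbers; a quick back-of-the-envelope calculation (the ~1,700 TPS figure for Visa's average load is a commonly cited outside estimate, not a number from this proposal):

    # Average throughput implied by the 24-hour, 250MM+ transaction demo
    transactions = 250_000_000
    seconds_per_day = 24 * 60 * 60          # 86,400 seconds

    tps = transactions / seconds_per_day
    print(f"Average throughput: {tps:,.0f} TPS")              # ~2,894 TPS

    # Visa's average load is commonly cited at ~1,700 TPS (an assumption,
    # not a figure from this proposal)
    print(f"Multiple of Visa's average: {tps / 1_700:.1f}x")  # ~1.7x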
Tooling to reproduce our demo is available here:
https://github.com/dragonchain/spirit-bomb

Source Code

Source code (for on & off-chain components as well as tooling used for the PoC). The source code does not have to be shared publicly, but if Reddit decides to use a particular solution it will need to be shared with Reddit at some point.

Scaling

How it works & scales

Architectural Scaling

Dragonchain’s architecture attacks the scalability issue from multiple angles. Dragonchain is a hybrid blockchain platform, wherein every transaction is protected on a business node to the requirements of that business or purpose. A business node may be kept completely private or may be exposed and replicated to any degree desired.
Every node has its own blockchain and is independently scalable. Dragonchain established Context Based Verification as its consensus model. Every transaction is immediately usable on a trust basis, and in time is provable to an increasing level of decentralized consensus. A transaction will have a level of decentralization to independently owned and deployed Dragonchain nodes (~700 nodes) within seconds, and full decentralization to BTC and ETH within minutes or hours. Level 5 nodes (Interchain nodes) function to secure all transactions to public or otherwise external chains such as Bitcoin and Ethereum. These nodes scale the system by aggregating multiple blocks into a single Interchain transaction on a cadence. This timing is configurable based upon average fees for each respective chain. For detailed information about Dragonchain’s architecture, and Context Based Verification, please refer to the Dragonchain Architecture Document.
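To make the economics of that aggregation concrete, here is a minimal sketch of how batching amortizes a public-chain anchoring fee across many transactions; the fee and batch size below are illustrative assumptions, not quoted Dragonchain or Ethereum rates:

    # Illustrative only: batching N transactions behind one Interchain anchor
    anchor_fee_usd = 2.00          # assumed cost of one public-chain transaction
    txns_per_anchor = 500_000      # assumed transactions aggregated per anchor

    cost_per_txn = anchor_fee_usd / txns_per_anchor
    print(f"Anchoring cost per transaction: ${cost_per_txn:.7f}")  # $0.0000040

    # A longer cadence (bigger batch) drives per-transaction cost down further,
    # which is why the cadence is tuned to each chain's average fees.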

Economic Scaling

An interesting feature of Dragonchain’s network consensus is its economics and scarcity model. Since Dragon Net nodes (L2-L4) are independent staking nodes, deployment to cloud platforms would allow any of these nodes to scale to take on a large percentage of the verification work. This is great for scalability, but not good for the economy, because there is no scarcity, and pricing would develop a downward spiral and result in fewer verification nodes. For this reason, Dragonchain uses TIME as scarcity.
TIME is calculated as the number of Dragons held, multiplied by the number of days held. TIME influences the user’s access to features within the Dragonchain ecosystem. It takes into account both the Dragon balance and length of time each Dragon is held. TIME is staked by users against every verification node and dictates how much of the transaction fees are awarded to each participating node for every block.
TIME also dictates the transaction fee itself for the business node. TIME is staked against a business node to set a deterministic transaction fee level (see transaction fee table below in Cost section). This is very interesting in a discussion about scaling because it guarantees independence for business implementation. No matter how much traffic appears on the entire network, a business is guaranteed to not see an increased transaction fee rate.
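Since TIME is defined by a simple formula (Dragons held multiplied by days held), a minimal sketch makes it concrete; the function below is illustrative, not part of Dragonchain's SDK:

    # TIME = Dragons held x days held, summed over each holding period
    def time_score(holdings):
        """holdings: list of (dragons_held, days_held) tuples."""
        return sum(dragons * days for dragons, days in holdings)

    # Example: 10,000 Dragons held for a year, plus 5,000 held for 30 days
    print(time_score([(10_000, 365), (5_000, 30)]))   # 3,800,000 TIME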

Scaled Deployment

Dragonchain uses Docker and Kubernetes to allow the use of best-practice traditional system scaling. Dragonchain offers managed nodes with an easy-to-use web-based console interface. The user may also deploy a Dragonchain node within their own datacenter or favorite cloud platform. Users have deployed Dragonchain nodes on-premises and on Amazon AWS, Google Cloud, MS Azure, and other hosting platforms around the world. Any executable code, anything you can write, can be written into a smart contract. This flexibility is what allows us to say that developers with no blockchain experience can use any code language to access the benefits of blockchain. Customers have used NodeJS, Python, Java, and even BASH shell script to write smart contracts on Dragonchain.
With Docker containers, we achieve better separation of concerns, faster deployment, higher reliability, and lower response times.
We chose Kubernetes for its self-healing features, ability to run multiple services on one server, and its large and thriving development community. It is resilient, scalable, and automated. OpenFaaS allows us to package smart contracts as Docker images for easy deployment.
Contract deployment time is now bounded only by the size of the Docker image being deployed but remains fast even for reasonably large images. We also take advantage of Docker’s flexibility and its ability to support any language that can run on x86 architecture. Any image, public or private, can be run as a smart contract using Dragonchain.
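To illustrate the "any executable code in a container" model, here is a hypothetical contract body in Python; the read-JSON-on-stdin, write-JSON-on-stdout convention is an assumption made for this sketch, not Dragonchain's documented contract interface:

    # Hypothetical smart contract body: packaged into a Docker image, any
    # such executable could run as a contract. It reads a transaction
    # payload, applies a business rule, and emits a result.
    import json
    import sys

    def handle(payload):
        # Example business rule: award 10 points per upvote in the payload
        upvotes = payload.get("upvotes", 0)
        return {"points_awarded": upvotes * 10}

    if __name__ == "__main__":
        payload = json.load(sys.stdin)
        json.dump(handle(payload), sys.stdout)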

Flexibility in Scaling

Dragonchain’s architecture considers interoperability and integration as key features. From inception, we had a goal to increase adoption via integration with real business use cases and traditional systems.
We envision the ability for Reddit, in the future, to be able to integrate alternate content storage platforms or other financial services along with the token.
  • LBRY - to allow users to deploy content natively to LBRY
  • MakerDAO - to allow users to lend small amounts backed by their Reddit community points
  • STORJ/SIA - to allow decentralized on-chain storage of portions of content
These integrations, or any others, are relatively easy to implement on Dragonchain with an Interchain implementation.

Cost

Cost estimates (on-chain and off-chain)
For the purpose of this proposal, we assume that all transactions are on chain (posts, replies, and votes).
On the Dragonchain network, transaction costs are deterministic/predictable. By staking TIME on the business node (as described above) Reddit can reduce transaction costs to as low as $0.0000025 per transaction.
Dragonchain Fees Table

Getting Started

How to run it
Building on Dragonchain is simple and requires no blockchain experience. Spin up a business node (L1) in our managed environment (AWS), run it in your own cloud environment, or on-prem in your own datacenter. Clear documentation will walk you through the steps of spinning up your first Dragonchain Level 1 Business node.
Getting started is easy...
  1. Download Dragonchain’s dctl
  2. Input three commands into a terminal
  3. Build an image
  4. Run it
More information can be found in our Get started documents.

Architecture
Dragonchain is an open source hybrid platform. Through Dragon Net, each chain combines the power of a public blockchain (like Ethereum) with the privacy of a private blockchain.
Dragonchain organizes its network into five separate levels. A Level 1, or business node, is a totally private blockchain only accessible through the use of public/private keypairs. All business logic, including smart contracts, can be executed on this node directly and added to the chain.
After creating a block, the Level 1 business node broadcasts a version stripped of sensitive private data to Dragon Net. Three Level 2 Validating nodes validate the transaction based on guidelines determined by the business. A Level 3 Diversity node checks that the Level 2 nodes are from a diverse array of locations. A Level 4 Notary node, hosted by a KYC partner, then signs the validation record received from the Level 3 node. The transaction hash is ledgered to the Level 5 public chain to take advantage of the hash power of massive public networks.
Dragon Net can be thought of as a “blockchain of blockchains”, where every level is a complete private blockchain. Because an L1 can send to multiple nodes on a single level, proof of existence is distributed among many places in the network. Eventually, proof of existence reaches level 5 and is published on a public network.
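The flow through the five levels can be summarized in a runnable sketch; all function names and data shapes below are invented for illustration and simplify the real protocol considerably:

    # Conceptual sketch of a block's path through Dragon Net (names invented)
    import hashlib
    import json

    def strip_private_data(block):
        # L1 keeps sensitive payloads private; only proof data is broadcast
        return {"block_id": block["block_id"], "proof": block["proof"]}

    def broadcast(block):
        public = strip_private_data(block)                   # L1: business node
        l2 = [f"L2-validation-{i}" for i in range(3)]        # L2: three validators
        l3 = "L3-diversity-check"                            # L3: geographic spread
        l4 = "L4-notary-signature"                           # L4: KYC-hosted notary
        record = json.dumps({"public": public, "l2": l2, "l3": l3, "l4": l4})
        anchor = hashlib.sha256(record.encode()).hexdigest()
        return anchor                                        # L5: hash ledgered to BTC/ETH/ETC

    print(broadcast({"block_id": 1, "proof": "abc123", "payload": "kept private"}))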

API Documentation

APIs (on chain & off)

SDK Source

Nobody’s Perfect

Known issues or tradeoffs
  • Dragonchain is open source and, even though the platform is easy enough for developers to code in any language they are comfortable with, our developer community is not as large as Ethereum’s. We would like to see the Ethereum developer community (and any other communities) become familiar with our SDKs, our solutions, and our platform, to unlock the full potential of our Ethereum Interchain. Long ago we decided to prioritize both Bitcoin and Ethereum Interchains. We envision an ecosystem that encompasses different projects to give developers the ability to take full advantage of all the opportunities blockchain offers to create decentralized solutions, not only for Reddit but for all of our current platforms and systems. We believe that together we will take the adoption of blockchain further. We currently have an additional Interchain with Ethereum Classic. We look forward to Interchain with other blockchains in the future. We invite all blockchain projects who believe in decentralization and security to Interchain with Dragonchain.
  • We only have ~700 nodes, compared to roughly 8,000 Ethereum and 10,000 Bitcoin nodes. However, through Interchain we harness those 18,000 nodes to scale to extremely high levels of security. See Dragonchain metrics.
  • Some may consider the centralization of Dragonchain’s business nodes as an issue at first glance, however, the model is by design to protect business data. We do not consider this a drawback as these nodes can make any, none, or all data public. Depending upon the implementation, every subreddit could have control of its own business node, for potential business and enterprise offerings, bringing new alternative revenue streams to Reddit.

Costs and resources

Summary of cost & resource information for both on-chain & off-chain components used in the PoC, as well as cost & resource estimates for further scaling. If your PoC is not on mainnet, make note of any mainnet caveats (such as congestion issues).
Every transaction on the PoC system had a transaction fee of $0.0001 (one-hundredth of a cent USD). At 256MM transactions, the demo cost $25,600. With current operational fees, the same demonstration would cost $640 USD.
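These figures follow directly from the per-transaction fees; a quick verification:

    # Reproducing the PoC cost figures from the stated fees
    transactions = 256_000_000

    demo_fee = 0.0001          # fee per transaction during the Jan 2020 demo
    current_fee = 0.0000025    # current lowest fee per transaction

    print(f"Demo cost:    ${transactions * demo_fee:,.0f}")      # $25,600
    print(f"Current cost: ${transactions * current_fee:,.0f}")   # $640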
For the demonstration, to achieve throughput to mimic a worldwide payments network, we modeled several clients in AWS and 4-5 business nodes to handle the traffic. The business nodes were tuned to handle higher throughput by adjusting memory and machine footprint on AWS. This flexibility is valuable to implementing a system such as envisioned by Reddit. Given that Reddit’s daily traffic (posts, replies, and votes) is less than half that of our demo, we would expect that the entire Reddit system could be handled on 2-5 business nodes using right-sized containers on AWS or similar environments.
Verification was accomplished on the operational Dragon Net network with over 700 independently owned verification nodes running around the world at no cost to the business other than paid transaction fees.

Requirements

Scaling

This PoC should scale to the numbers below with minimal costs (both on & off-chain). There should also be a clear path to supporting hundreds of millions of users.
Over a 5 day period, your scaling PoC should be able to handle:
  • 100,000 point claims (minting & distributing points)
  • 25,000 subscriptions
  • 75,000 one-off points burning
  • 100,000 transfers
During Dragonchain’s 24 hour demo, the above required numbers were reached within the first few minutes.
Reddit’s total activity is 9000% more than Ethereum’s total transaction level. Even if you do not include votes, it is still 700% more than Ethereum’s current volume. Dragonchain has demonstrated that it can handle 250 million transactions a day, and its architecture allows for multiple systems to work at that level simultaneously. In our PoC, we demonstrated double Reddit’s full capacity, and every transaction was proven all the way to Bitcoin and Ethereum.
Reddit Scaling on Ethereum

Decentralization

Solutions should not depend on any single third-party provider. We prefer solutions that do not depend on specific entities such as Reddit or another provider, and solutions with no single point of control or failure in off-chain components, but recognize there are numerous trade-offs to consider.
Dragonchain’s architecture calls for a hybrid approach. Private business nodes hold the sensitive data while the validation and verification of transactions for the business are decentralized within seconds and secured to public blockchains within 10 minutes to 2 hours. Nodes could potentially be controlled by owners of individual subreddits for more organic decentralization.
  • Billing is currently centralized - there is a path to federation and decentralization of a scaled billing solution.
  • Operational multi-cloud
  • Operational on-premises capabilities
  • Operational deployment to any datacenter
  • Over 700 independent Community Verification Nodes with proof of ownership
  • Operational Interchain (Interoperable to Bitcoin, Ethereum, and Ethereum Classic, open to more)

Usability
Scaling solutions should have a simple end user experience.

Users shouldn't have to maintain any extra state/proofs, regularly monitor activity, keep track of extra keys, or sign anything other than their normal transactions
Dragonchain and its customers have demonstrated extraordinary usability as a feature in many applications, where users do not need to know that the system is backed by a live blockchain. Lyceum is one of these examples, where the progress of academy courses is tracked and successful completion of courses is rewarded with certificates on chain. Our @Save_The_Tweet bot is popular on Twitter. When used with one of the following hashtags - #please, #blockchain, #ThankYou, or #eternalize - the tweet is saved through Eternal to multiple blockchains. A proof report is available for future reference. Other examples in use are DEN, our decentralized social media platform, and our console, where users can track their node rewards, view their TIME, and operate a business node.

Transactions complete in a reasonable amount of time (seconds or minutes, not hours or days)
All transactions are immediately usable on chain by the system. A transaction begins the path to decentralization at the conclusion of a 5-second block when it gets distributed across 5 separate community run nodes. Full decentralization occurs within 10 minutes to 2 hours depending on which interchain (Bitcoin, Ethereum, or Ethereum Classic) the transaction hits first. Within approximately 2 hours, the combined hash power of all interchained blockchains secures the transaction.

Free to use for end users (no gas fees, or fixed/minimal fees that Reddit can pay on their behalf)
With transaction pricing as low as $0.0000025 per transaction, it may be considered reasonable for Reddit to cover transaction fees for users.
All of Reddit's Transactions on Blockchain (month)
Community points can be earned by users and distributed directly to their Reddit account in batch (as per Reddit’s minting plan), and users can withdraw rewards to their Ethereum wallet whenever they wish. Withdrawal fees can be paid by either the user or Reddit. This model has been operating inside the Dragonchain system since 2018, and many security and financial compliance features can be optionally added. We feel that this capability greatly enhances user experience because it is seamless to a regular user without cryptocurrency experience, yet flexible for a tech-savvy user. With regard to currency or token transactions, these would occur on the Reddit network, verified to BTC and ETH. These transactions would incur the $0.0000025 transaction fee. To estimate this fee we use the monthly active Reddit user count (per Statista) with a 60% adoption rate and an estimated 10 transactions per month on average, resulting in an approximate $720 cost across the system. Reddit could feasibly incur all associated internal network charges (mining/minting, transfer, burn) as these are very low and controllable fees.
Reddit Internal Token Transaction Fees
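A sketch of that estimate's arithmetic follows; note that the monthly-active-user input is back-solved from the stated ~$720 result and is my assumption about the figure used, not a sourced statistic:

    # Back-of-the-envelope internal fee estimate (inputs per the proposal;
    # the MAU figure is back-solved from the ~$720 result, not sourced)
    monthly_active_users = 48_000_000
    adoption_rate = 0.60
    txns_per_user_per_month = 10
    fee_per_txn = 0.0000025

    monthly_cost = (monthly_active_users * adoption_rate
                    * txns_per_user_per_month * fee_per_txn)
    print(f"Estimated monthly network cost: ${monthly_cost:,.0f}")   # ~$720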

Reddit Ethereum Token Transaction Fees
When we consider further the Ethereum fees that might be incurred, we have a few choices for a solution.
  1. Offload all Ethereum transaction fees (user withdrawals) to interested users as they wish to withdraw tokens for external use or sale.
  2. Cover Ethereum transaction fees by aggregating them on a timed schedule. Users would request withdrawal (from Reddit or individual subreddits), and the withdrawals would be transacted on the Ethereum network every hour (or some other schedule); see the sketch after this list.
  3. In a combination of the above, customers could cover aggregated fees.
  4. Integrate with alternate Ethereum roll up solutions or other proposals to aggregate minting and distribution transactions onto Ethereum.
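Option 2 above could be as simple as queueing withdrawal requests and flushing them on a schedule. A minimal sketch, with all names and the settlement step assumed for illustration:

    # Illustrative only: aggregate withdrawals, settle one batch per interval
    from collections import defaultdict

    pending = defaultdict(int)    # eth_address -> points awaiting withdrawal

    def request_withdrawal(eth_address, points):
        pending[eth_address] += points

    def flush_batch():
        batch = dict(pending)
        pending.clear()
        # A single batched Ethereum transaction (e.g. one mint/transfer call
        # per batch) would be submitted here, amortizing gas across requests.
        return batch

    request_withdrawal("0xabc...", 100)
    request_withdrawal("0xdef...", 250)
    request_withdrawal("0xabc...", 50)
    print(flush_batch())   # {'0xabc...': 150, '0xdef...': 250}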

Bonus Points

Users should be able to view their balances & transactions via a blockchain explorer-style interface
A system powered by Dragonchain has the flexibility to present balances and transaction data to any audience, from users with no knowledge of blockchain technology to users well versed in the terms found in a typical block explorer. Transactions can be made viewable in an Eternal Proof Report, which displays raw data along with TIME staking information and traceability all the way to Bitcoin, Ethereum, and every other Interchained network. The report shows fields such as transaction ID, timestamp, block ID, multiple verifications, and Interchain proof. See example here.
Node payouts within the Dragonchain console are listed in chronological order and can be further seen in either Dragons or USD. See example here.
In our social media platform, Dragon Den, users can see, in real-time, their NRG and MTR balances. See example here.
A new influencer app powered by Dragonchain, Raiinmaker, breaks down data into a user friendly interface that shows coin portfolio, redeemed rewards, and social scores per campaign. See example here.

Exiting is fast & simple
Withdrawing funds on Dragonchain’s console requires three clicks; however, withdrawal scenarios with more enhanced security features, per Reddit’s discretion, are obtainable.

Interoperability
Compatibility with third party apps (wallets/contracts/etc) is necessary.
Proven interoperability at scale that surpasses the required specifications. Our entire platform consists of interoperable blockchains connected to each other and traditional systems. APIs are well documented. Third party permissions are possible with a simple smart contract without the end user being aware. No need to learn any specialized proprietary language. Any code base (not subsets) is usable within a Docker container. Interoperable with any blockchain or traditional APIs. We’ve witnessed relatively complex systems built by engineers with no blockchain or cryptocurrency experience. We’ve also demonstrated the creation of smart contracts within minutes built with BASH shell and Node.js. Please see our source code and API documentation.

Scaling solutions should be extensible and allow third parties to build on top of it
Open source and extensible.
APIs should be well documented and stable

Documentation should be clear and complete
For full documentation, explore our docs, SDKs, GitHub repos, architecture documents, original Disney documentation, and other links or resources provided in this proposal.

Third-party permissionless integrations should be possible & straightforward
Smart contracts are Docker based, can be written in any language, use the full language (not subsets), and can therefore be integrated with any system, including traditional system APIs. Simple is better. Learning an uncommon or proprietary language should not be necessary.
Advanced knowledge of mathematics, cryptography, or L2 scaling should not be required.
Compatibility with common utilities & toolchains is expected.
Dragonchain business nodes and smart contracts leverage Docker to allow the use of literally any language or executable code. No proprietary language is necessary. We’ve witnessed relatively complex systems built by engineers with no blockchain or cryptocurrency experience. We’ve also demonstrated the creation of smart contracts within minutes built with BASH shell and Node.js.

Bonus

Bonus Points: Show us how it works. Do you have an idea for a cool new use case for Community Points? Build it!

TIME

Community points could be awarded to Reddit users based upon TIME too, whereby the longer someone is part of a subreddit, the more community points they naturally gain, even if they are not actively commenting or sharing new posts. A daily login could be required for these community points to be credited. This grants awards to readers too, and incentivizes readers to create an account on Reddit if they browse the website often. This concept could also be leveraged to provide some level of reputation based upon duration and consistency of contribution to a community subreddit.
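A minimal sketch of what such a tenure-weighted award could look like; the thresholds, rates, and names are entirely illustrative:

    # Illustrative tenure-based award: longer, consistent membership earns more
    def daily_points(days_subscribed, logged_in_today):
        if not logged_in_today:                 # daily login required for credit
            return 0
        base = 1
        tenure_bonus = days_subscribed // 365   # +1 point per full year
        return base + tenure_bonus

    print(daily_points(800, True))    # 3 points: base + 2 full years of tenure
    print(daily_points(800, False))   # 0 points: no login today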

Dragon Den

Dragonchain has already built a social media platform that harnesses community involvement. Dragon Den is a decentralized community built on the Dragonchain blockchain platform. Dragon Den is Dragonchain’s answer to fake news, trolling, and censorship. It incentivizes the creation and evaluation of quality content within communities. It could be likened to being a shareholder of a subreddit, or of Reddit in its entirety. The more your subreddit is thriving, the more rewarding it will be. Den is currently in a public beta and in active development, though the real token economy is not live yet. There are different tokens for various purposes. Two tokens are Lair Ownership Rights (LOR) and Lair Ownership Tokens (LOT). LOT is a non-fungible token for ownership of a specific Lair. LOT will only be created and converted from LOR.
Energy (NRG) and Matter (MTR) work jointly. Your MTR determines how much NRG you receive in a 24-hour period. Providing quality content, or evaluating content will earn MTR.

Security. Users have full ownership & control of their points.
All community points awarded based upon any type of activity or gift, are secured and provable to all Interchain networks (currently BTC, ETH, ETC). Users are free to spend and withdraw their points as they please, depending on the features Reddit wants to bring into production.

Balances and transactions cannot be forged, manipulated, or blocked by Reddit or anyone else
Users can withdraw their balance to their ERC20 wallet, directly through Reddit. Reddit can cover the fees on their behalf, or the user covers this with a portion of their balance.

Users should own their points and be able to get on-chain ERC20 tokens without permission from anyone else
Through our console users can withdraw their ERC20 rewards. This can be achieved on Reddit too. Here is a walkthrough of our console; though it does not show the quick withdrawal functionality, a user can withdraw at any time. https://www.youtube.com/watch?v=aNlTMxnfVHw

Points should be recoverable to on-chain ERC20 tokens even if all third-parties involved go offline
If necessary, signed transactions from the Reddit system (e.g. Reddit + Subreddit) can be sent to the Ethereum smart contract for minting.

A public, third-party review attesting to the soundness of the design should be available
To our knowledge, at least two large corporations, including a top 3 accounting firm, have conducted positive reviews. These reviews have never been made public, as Dragonchain did not pay or contract for these studies to be released.

Bonus points
Public, third-party implementation review available or in progress
See above

Compatibility with HSMs & hardware wallets
For the purpose of this proposal, all tokenization would be on the Ethereum network using standard token contracts and as such, would be able to leverage all hardware wallet and Ethereum ecosystem services.

Other Considerations

Minting/distributing tokens is not performed by Reddit directly
This operation can be automated by smart contract on Ethereum. Subreddits can, if desired, have a role to play.

One off point burning, as well as recurring, non-interactive point burning (for subreddit memberships) should be possible and scalable
This is possible and scalable with interaction between the Dragonchain Reddit system and Ethereum token contract(s).

Fully open-source solutions are strongly preferred
Dragonchain is fully open source (see section on Disney release after conclusion).

Conclusion

Whether it is today, or in the future, we would like to work together to bring secure flexibility to the highest standards. It is our hope to be considered by Ethereum, Reddit, and other integrative solutions so we may further discuss the possibilities of implementation. In our public demonstration, 256 million transactions were handled in our operational network on chain in 24 hours, for the low cost of $25K, which if run today would cost $625. Dragonchain’s interoperable foundation provides the atmosphere necessary to implement a frictionless community points system. Thank you for your consideration of our proposal. We look forward to working with the community to make something great!

Disney Releases Blockchain Platform as Open Source

The team at Disney created the Disney Private Blockchain Platform. The system was a hybrid interoperable blockchain platform for ledgering and smart contract development geared toward solving problems with blockchain adoption and usability. All objective evaluation would consider the team’s output a success. We released a list of use cases that we explored in some capacity at Disney, and our input on blockchain standardization as part of our participation in the W3C Blockchain Community Group.
https://lists.w3.org/Archives/Public/public-blockchain/2016May/0052.html

Open Source

In 2016, Roets proposed to release the platform as open source to spread the technology outside of Disney, as others within the W3C group were interested in the solutions that had been created inside of Disney.
Following a long process, step by step, the team met requirements for release. Among the requirements, the team had to:
  • Obtain VP support and approval for the release
  • Verify ownership of the software to be released
  • Verify that no proprietary content would be released
  • Convince the organization that there was a value to the open source community
  • Convince the organization that there was a value to Disney
  • Offer the plan for ongoing maintenance of the project outside of Disney
  • Itemize competing projects
  • Verify no conflict of interest
  • Preferred license
  • Change the project name to not use the name Disney, any Disney character, or any other associated IP - proposed Dragonchain - approved
  • Obtain legal approval
  • Approval from corporate, parks, and other business units
  • Approval from multiple Disney patent groups
  • Copyright holder defined by Disney (Disney Connected and Advanced Technologies)
  • Trademark searches conducted for the selected name Dragonchain
  • Obtain IT security approval
  • Manual review of OSS components conducted
  • OWASP Dependency and Vulnerability Check Conducted
  • Obtain technical (software) approval
  • Offer management, process, and financial plans for the maintenance of the project.
  • Meet list of items to be addressed before release
  • Remove all Disney project references and scripts
  • Create a public distribution list for email communications
  • Remove Roets’ direct and internal contact information
  • Create public Slack channel and move from Disney slack channels
  • Create proper labels for issue tracking
  • Rename internal private Github repository
  • Add informative description to Github page
  • Expand README.md with more specific information
  • Add information beyond current “Blockchains are Magic”
  • Add getting started sections and info on cloning/forking the project
  • Add installation details
  • Add uninstall process
  • Add unit, functional, and integration test information
  • Detail how to contribute and get involved
  • Describe the git workflow that the project will use
  • Move to public, non-Disney git repository (Github or Bitbucket)
  • Obtain Disney Open Source Committee approval for release
On top of meeting the above criteria, as part of the process, the maintainer of the project had to receive the codebase on their own personal email and create accounts for maintenance (e.g. Github) with non-Disney accounts. Given the fact that the project spanned multiple business units, Roets was individually responsible for its ongoing maintenance. Because of this, he proposed in the open source application to create a non-profit organization to hold the IP and maintain the project. This was approved by Disney.
The Disney Open Source Committee approved the application known as OSSRELEASE-10, and the code was released on October 2, 2016. Disney decided to not issue a press release.
Original OSSRELEASE-10 document

Dragonchain Foundation

The Dragonchain Foundation was created on January 17, 2017. https://den.social/l/Dragonchain/24130078352e485d96d2125082151cf0/dragonchain-and-disney/
submitted by j0j0r0 to ethereum

NVidia – Know What You Own

How many people really understand what they’re buying, especially when it comes to highly specialized hardware companies? Most NVidia investors seem to be relying on a vague idea of how the company should thrive “in the future”, as their GPUs are ostensibly used for Artificial Intelligence, Cloud, holograms, etc. Having been shocked by how this company is represented in the media, I decided to lay out how this business works, doing my part to fight for reality. With what’s been going on in markets, I don’t like my chances but here goes:
Let’s start with…
How does NVDA make money?
NVDA is in the business of semiconductor design. As a simplified image in your head, you can imagine this as designing very detailed and elaborate posters. Their engineers create circuit patterns for printing onto semiconductor wafers. NVDA then pays a semiconductor foundry (the printer – generally TSMC) to create chips with those patterns on them.
Simply put, NVDA’s profits represent the difference between the price at which they can sell those chips, less the cost of printing, and less the cost of paying their engineers to design them.
Notably, after the foundry prints the chips, NVDA also has to pay (I say pay, but really it is more like “sell at a discount to”) their “add-in board” (AIB) partners to stick the chips onto printed circuit boards (what you might imagine as green things with a bunch of capacitors on them). That leads to the final form in which buyers experience the GPU.
What is a GPU?
NVDA designs chips called GPUs (Graphical Processing Units). Initially, GPUs were used for the rapid processing and creation of images, but their use cases have expanded over time. You may be familiar with the CPU (Central Processing Unit). CPUs sit at the core of a computer system, doing most of the calculation, taking orders from the operating system (e.g. Windows, Linux), etc. AMD and Intel make CPUs. GPUs assist the CPU with certain tasks. You can think of the CPU as having a few giant very powerful engines. The GPU has a lot of small much less powerful engines. Sometimes you have to do a lot of really simple tasks that don’t require powerful engines to complete. Here, the act of engaging the powerful engines is a waste of time, as you end up spending most of your time revving them up and revving them down. In that scenario, it helps the CPU to hand that task over to the GPU in order to “accelerate” the completion of the task. The GPU only revs up a small engine for each task, and is able to rev up all the small engines simultaneously to knock out a large number of these simple tasks at the same time. Remember the GPU has lots of engines. The GPU also has an edge in interfacing a lot with memory but let’s not get too technical.
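The "many small engines" idea is just data parallelism. The toy NumPy comparison below shows the pattern on a CPU (applying one operation across a large array at once, rather than dispatching each tiny task separately); a GPU takes the same idea much further with thousands of parallel lanes:

    # Data parallelism in miniature: batch the tiny tasks instead of
    # dispatching them one at a time
    import time
    import numpy as np

    a = np.random.rand(1_000_000)
    b = np.random.rand(1_000_000)

    t0 = time.time()
    c_loop = [x + y for x, y in zip(a, b)]    # one tiny task at a time
    t1 = time.time()
    c_vec = a + b                             # the whole batch at once
    t2 = time.time()

    print(f"One at a time: {t1 - t0:.3f}s")
    print(f"Batched:       {t2 - t1:.5f}s")   # typically orders of magnitude faster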
Who uses NVDA’s GPUs?
There are two main broad end markets for NVDA’s GPUs – Gaming and Professional. Let’s dig into each one:
The Gaming Market:
A Bit of Ancient History (Skip if impatient)
GPUs were first heavily used for gaming in arcades. They then made their way to consoles, and finally PCs. NVDA started out in the PC phase of GPU gaming usage. They weren’t the first company in the space, but they made several good moves that ultimately led to a very strong market position. Firstly, they focused on selling into OEMs – guys like the equivalent of today’s DELL/HP/Lenovo – which allowed a small company to get access to a big market without having to create a lot of relationships. Secondly, they focused on the design aspect of the GPU, and relied on their Asian supply chain to print the chip, to package the chip, and to install it on a printed circuit board – the Asian supply chain ended up being the best in semis. But the insight that really let NVDA dominate was noticing that some GPU manufacturers were focusing on keeping hardware-accelerated Transform and Lighting as a Professional GPU feature. As a start-up, with no professional GPU business to disrupt, NVidia decided their best ticket into the big leagues was blowing up the market by including this professional-grade feature in their gaming product. It worked – and this was a real masterstroke – the visual and performance improvements were extraordinary. 3DFX, the initial leader in PC gaming GPUs, was vanquished, and importantly it happened when funding markets shut down with the tech bubble bursting and after 3DFX made some large ill-advised acquisitions. Consequently 3DFX went from hero to zero, and NVDA bought them for a pittance out of bankruptcy, acquiring the best IP portfolio in the industry.
Some more Modern History
This is what NVDA’s pure gaming card revenue looks like over time – NVDA only really broke these out in 2005 (note: “pure” means ex-Tegra revenues):
[Chart: NVDA gaming card revenue over time] https://hyperinflation2020.tumblr.com/private/618394577731223552/tumblr_Ikb8g9Cu9sxh2ERno
So what is the history here? Well, back in the late 90s when GPUs were first invented, they were required to play any 3D game. As discussed in the early history above, NVDA landed a hit product to start with early and got a strong burst of growth: revenues of 160M in 1998 went to 1900M in 2002. But then NVDA ran into strong competition from ATI (later purchased and currently owned by AMD). While NVDA’s sales struggled to stay flat from 2002 to 2004, ATI’s doubled from 1Bn to 2Bn. NVDA’s next major win came in 2006, with the 8000 series. ATI was late with a competing product, and NVDA’s sales skyrocketed – as can be seen in the graph above. With ATI being acquired by AMD they were unfocused for some time, and NVDA was able to keep their lead for an extended period. Sales slowed in 2008/2009 but that was due to the GFC – people don’t buy expensive GPU hardware in recessions.
And then we got to 2010 and the tide changed. Growth in desktop PCs ended. Here is a chart from Statista:
[Chart: desktop PC shipments over time, via Statista] https://hyperinflation2020.tumblr.com/private/618394674172919808/tumblr_OgCnNwTyqhMhAE9r9
This resulted in two negative secular trends for Nvidia. Firstly, with the decline in popularity of desktop PCs, growth in gaming GPUs faded as well (below is a chart from Jon Peddie). Note that NVDA sells discrete GPUs, aka DT (Desktop) Discrete. Integrated GPUs are mainly made by Intel (these sit on the motherboard or with the CPU).
[Chart: discrete vs. integrated GPU shipments, via Jon Peddie] https://hyperinflation2020.tumblr.com/private/618394688079200256/tumblr_rTtKwOlHPIVUj8e7h
You can see from the chart above that discrete desktop GPU sales are fading faster than integrated GPU sales. This is the other secular trend hurting NVDA’s gaming business. Integrated GPUs are getting better and better, taking over a wider range of tasks that were previously the domain of the discrete GPU. Surprisingly, the most popular eSports game of recent times – Fortnite – only requires Intel HD 4000 graphics – an Integrated GPU from 2012!
So at this point you might go back to NVDA’s gaming sales, and ask the question: What happened in 2015? How is NVDA overcoming these secular trends?
The answer consists of a few parts. Firstly, AMD dropped the ball in 2015. As you can see in this chart, sourced from 3DCenter, AMD market share was halved in 2015, due to a particularly poor product line-up:
[Chart: AMD vs. NVDA discrete GPU market share, via 3DCenter] https://hyperinflation2020.tumblr.com/private/618394753459994624/tumblr_J7vRw9y0QxMlfm6Xd
Following this, NVDA came out with Pascal in 2016 – a very powerful offering in the mid to high end part of the GPU market. At the same time, AMD was focusing on rebuilding and had no compelling mid or high end offerings. AMD mainly focused on maintaining scale in the very low end. Following that came 2017 and 2018: AMD’s offering was still very poor at the time, but cryptomining drove demand for GPUs to new levels, and AMD’s GPUs were more compelling from a price-performance standpoint for crypto mining initially, perversely leading to AMD gaining share. NVDA quickly remedied that by improving their drivers to better mine crypto, regaining their relative positioning, and profiting in a big way from the crypto boom. Supply that was calibrated to meet gaming demand collided with cryptomining demand and Average Selling Prices of GPUs shot through the roof. Cryptominers bought top of the line GPUs aggressively.
A good way to see changes in crypto demand for GPUs is the mining profitability of Ethereum:
[Chart: Ethereum mining profitability over time] https://hyperinflation2020.tumblr.com/private/618394769378443264/tumblr_cmBtR9gm8T2NI9jmQ
This leads us to where we are today. 2019 saw gaming revenues drop for NVDA. Where are they likely to head?
The secular trends of falling desktop sales along with falling discrete GPU sales have reasserted themselves, as per the Jon Peddie research above. Cryptomining profitability has collapsed.
AMD has come out with a new architecture, NAVI, and its first iteration, the 5700XT, competes effectively with NVDA in the mid-to-high-end space on a price/performance basis. This is the first real competition from AMD since 2014.
NVDA can see all these trends, and they tried to respond. Firstly, with volumes clearly declining, and likely with a glut of second-hand GPUs that can make their way to gamers over time from the crypto space, NVDA decided to pursue a price over volume strategy. They released their most expensive set of GPUs by far in the latest Turing series. They added a new feature, Ray Tracing, by leveraging the Tensor Cores they had created for Professional uses, hoping to use that as justification for higher prices (more on this in the section on Professional GPUs). Unfortunately for NVDA, gamers have responded quite poorly to Ray Tracing – it caused performance issues, had poor support, poor adoption, and the visual improvements in most cases are not particularly noticeable or relevant.
The last recession led to gaming revenues falling 30%, despite NVDA being in a very strong position at the time vis-à-vis AMD – this time around their position is quickly slipping and it appears that the recession is going to be bigger. Additionally, the shift away from discrete GPUs in gaming continues.
To make matters worse for NVDA, AMD won the slots in both the New Xbox and the New PlayStation, coming out later this year. The performance of just the AMD GPU in those consoles looks to be competitive with NVidia products that currently retail for more than the entire console is likely to cost. Consider that usually you have to pair that NVidia GPU with a bunch of other expensive hardware. The pricing and margin impact of this console cycle on NVDA is likely to be very substantially negative.
It would be prudent to assume a greater than 30% fall in gaming revenues from the very elevated 2019 levels, with likely secular decline to follow.
The Professional Market:
A Bit of Ancient History (again, skip if impatient)
As it turns out, graphical accelerators were first used in the Professional market, long before they were employed for Gaming purposes. The big leader in the space was a company called Silicon Graphics, who sold workstations with custom silicon optimised for graphical processing. Their sales were only $25Mn in 1985, but by 1997 they were doing 3.6Bn in revenue – truly exponential growth. Unfortunately for them, from that point on, discrete GPUs took over, and their highly engineered, customised workstations looked exorbitantly expensive in comparison. Sales sank to 500mn by 2006 and, with no profits in sight, they ended up filing for bankruptcy in 2009. Competition is harsh in the semiconductor industry.
Initially, the Professional market centred on visualisation and design, but it has changed over time. There were a lot of players and a lot of nuance, but I am going to focus on more recent times, as they are more relevant to NVidia.
Some More Modern History
NVDA’s Professional business started after its gaming business, but we don’t have revenue disclosures that show exactly when it became relevant. This is what we do have – going back to 2005:
[Chart: NVDA Professional segment revenue since 2005] https://hyperinflation2020.tumblr.com/private/618394785029472256/tumblr_fEcYAzdstyh6tqIsI
In the beginning, Professional revenues were focused on the 3D visualisation end of the spectrum, with initial sales going into workstations that were edging out the customised builds made by Silicon Graphics. Fairly quickly, however, GPUs added more and more functionality and started to turn into general parallel data processors rather than being solely optimised towards graphical processing.
As this change took place, people in scientific computing noticed, and started using GPUs to accelerate scientific workloads that involve very parallel computation, such as matrix manipulation. This started at the workstation level, but by 2007 NVDA decided to make a new line-up of Tesla series cards specifically suited to scientific computing. The professional segment now has several points of focus:
  1. GPUs used in workstations for things such as CAD graphical processing (Quadro Line)
  2. GPUs used in workstations for computational workloads such as running engineering simulations (Quadro Line)
  3. GPUs used in workstations for machine learning applications (Quadro line.. but can use gaming cards as well for this)
  4. GPUs used by enterprise customers for high performance computing (such as modelling oil wells) (Tesla Line)
  5. GPUs used by enterprise customers for machine learning projects (Tesla Line)
  6. GPUs used by hyperscalers (mostly for machine learning projects) (Tesla Line)
In more recent times, given the expansion of the Tesla line, NVDA has broken up reporting into Professional Visualisation (Quadro Line) and Datacenter (Tesla Line). Here are the revenue splits since that reporting started:
[Chart: Professional Visualisation (Quadro) revenue] https://hyperinflation2020.tumblr.com/private/618394798232158208/tumblr_3AdufrCWUFwLgyQw2
[Chart: Datacenter (Tesla) revenue] https://hyperinflation2020.tumblr.com/private/618394810632601600/tumblr_2jmajktuc0T78Juw7
It is worth stopping here and thinking about the huge increase in sales delivered by the Tesla line. The reason for this huge boom is the sudden increase in interest in numerical techniques for machine learning. Let’s go on a brief detour here to understand what machine learning is, because a lot of people want to hype it but not many want to tell you what it actually is. I have the misfortune of being very familiar with the industry, which prevented me from buying into the hype. Oops – sometimes it really sucks being educated.
What is Machine Learning?
At a very high level, machine learning is all about trying to get some sort of insight out of data. Most of the core techniques used in machine learning were developed a long time ago, in the 1950s and 1960s. The most common machine learning technique, which most people have heard of and may be vaguely familiar with, is called regression analysis. Regression analysis involves fitting a line through a bunch of datapoints. The most common type of regression analysis is called “Ordinary Least Squares” OLS regression, and that type of regression has a “closed form” solution, which means that there is a very simple calculation you can do to fit an OLS regression line to data.
As it happens, fitting a line through points is not only easy to do, it also tends to be the main machine learning technique that people want to use, because it is very intuitive. You can make good sense of what the data is telling you and can understand the machine learning model you are using. Obviously, regression analysis doesn’t require a GPU!
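For the curious, the closed-form solution is one line of linear algebra, beta = (X'X)^-1 X'y; a minimal NumPy example:

    # Closed-form OLS fit: no GPU required
    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])       # roughly y = 2x

    X = np.column_stack([np.ones_like(x), x])     # intercept column + x
    beta = np.linalg.solve(X.T @ X, X.T @ y)      # solves (X'X) beta = X'y
    print(beta)                                   # ~[0.14, 1.96]: intercept, slope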
However, there is another consideration in machine learning: if you want to use a regression model, you still need a human to select the data that you want to fit the line through. Also, sometimes the relationship doesn’t look like a line, but rather it might look like a curve. In this case, you need a human to “transform” the data before you fit a line through it in order to make the relationship linear.
So people had another idea here: what if instead of getting a person to select the right data to analyse, and the right model to apply, you could just get a computer to do that? Of course the problem with that is that computers are really stupid. They have no preconceived notion of what data to use or what relationship would make sense, so what they do is TRY EVERYTHING! And everything involves trying a hell of a lot of stuff. And trying a hell of a lot of stuff, most of which is useless garbage, involves a huge amount of computation. People tried this for a while through to the 1980s, decided it was useless, and dropped it… until recently.
What changed? Well we have more data now, and we have a lot more computing power, so we figured lets have another go at it. As it happens, the premier technique for trying a hell of a lot of stuff (99.999% of which is garbage you throw away) is called “Deep Learning”. Deep learning is SUPER computationally intensive, and that computation happens to involve a lot of matrix multiplication. And guess what just happens to have been doing a lot of matrix multiplication? GPUs!
Here is a chart that, for obvious reasons, lines up extremely well with the boom in Tesla GPU sales:
[Chart] https://hyperinflation2020.tumblr.com/private/618394825774989312/tumblr_IZ3ayFDB0CsGdYVHW
Now we need to realise a few things here. Deep Learning is not some magic silver bullet. There are specific applications where it has proven very useful – primarily areas that have a very large number of very weak relationships between bits of data that sum up into strong relationships. An example of one of those is Google Translate. On the other hand, in most analytical tasks, it is most useful to have an intuitive understanding of the data and to fit a simple and sensible model to it that is explainable. Deep learning models are not explainable in an intuitive manner. This is not only because they are complicated, but also because their scattershot technique of trying everything leaves a huge amount of garbage inside the model that cancels itself out when calculating the answer, but it is hard to see how it cancels itself out when stepping through it.
Given the quantum of hype on Deep learning and the space in general, many companies are using “Deep Learning”, “Machine Learning” and “AI” as marketing. Not many companies are actually generating significant amounts of tangible value from Deep Learning.
Back to the Competitive Picture
For the Tesla Segment
So NVDA happened to be in the right place at the right time to benefit from the Deep Learning hype. They happened to have a product ready to go and were able to charge a pretty penny for their product. But what happens as we proceed from here?
Firstly, it looks like the hype from Deep Learning has crested, which is not great from a future demand perspective. Not only that, but we really went from people having no GPUs, to people having GPUs. The next phase is people upgrading their old GPUs. It is much harder to sell an upgrade than to make the first sale.
Not only that, but GPUs are not the ideal manifestation of silicon for Deep Learning. NVDA themselves effectively admitted that with their latest iteration in the Datacentre, called Ampere. High Performance Computing, which was the initial use case for Tesla GPUs, was historically all about double precision floating point calculations (FP64). High precision calculations are required for simulations in aerospace/oil & gas/automotive.
NVDA basically sacrificed HPC and shifted further towards Deep Learning with Ampere, announced last Thursday. The FP64 performance of the A100 (the latest Ampere chip) increased a fairly pedestrian 24% from the V100, from 7.8 to 9.7 TF. Not a surprise that NVDA lost El Capitan to AMD, given this shift away from a focus on HPC. Instead, NVDA jacked up their Tensor Cores (i.e. not the GPU cores) and focused very heavily on FP16 computation (a lot less precise than FP64). As it turns out, FP16 is precise enough for Deep Learning, and NVDA recognises that. The future industry standard is likely to be BFloat 16 – the format pioneered by Google, who lead in Deep Learning. Ampere now does 312 TF of BF16, which compares to the 420 TF of Google’s TPU V3 – Google’s Machine Learning specific processor. Not quite up to the 2018 board from Google, but getting better – if they cut out all of the CUDA cores and GPU functionality maybe they could get up to Google’s spec.
And indeed this is the problem for NVDA: when you make a GPU it has a large number of different use cases, and you provide a single product that meets all of these different use cases. That is a very hard thing to do, and explains why it has been difficult for competitors to muscle into the GPU space. On the other hand, when you are making a device that does one thing, such as deep learning, it is a much simpler thing to do. Google managed to do it with no GPU experience and is still ahead of NVDA. It is likely that Intel will be able to enter this space successfully, as they have widely signalled with the Xe.
There is of course the other large negative driver for Deep Learning, and that is the recession we are now in. Demand for GPU instances on Amazon has collapsed across the board, as evidenced by the fall in pricing. The below graph shows one example: this data is for renting out a single Tesla V100 GPU on AWS, which is the typical thing to do in an early exploratory phase for a Deep Learning model:
[Chart: AWS rental price for a single Tesla V100 GPU] https://hyperinflation2020.tumblr.com/private/618396177958944768/tumblr_Q86inWdeCwgeakUvh
With Deep Learning not delivering near-term tangible results, it is the first thing being cut. On their most recent conference call, IBM noted weakness in their cognitive division (AI), and noted weaker sales of their power servers, which is the line that houses Enterprise GPU servers at IBM. Facebook cancelled their AI residencies for this year, and Google pushed theirs out. Even if NVDA can put in a good quarter due to their new product rollout (Ampere), the future is rapidly becoming a very stormy place.
For the Quadro segment
The Quadro segment has been a cash cow for a long time, generating dependable sales and solid margins. AMD just decided to rock the boat a bit. Sensing NVDA’s focus on Deep Learning, AMD seems to be focusing on HPC – the Radeon VII announced recently with a price point of $1899 takes aim at NVDAs most expensive Quadro, the GV100, priced at $8999. It does 6.5 TFLOPS of FP64 Double precision, whereas the GV100 does 7.4 – talk about shaking up a quiet segment.
Pulling things together
Let’s go back to what NVidia fundamentally does – paying their engineers to design chips, getting TSMC to print those chips, and getting board partners in Taiwan to turn them into the final product.
We have seen how a confluence of several pieces of extremely good fortune lined up to increase NVidia’s sales and profits tremendously: first on the Gaming side, weak competition from AMD until 2014, coupled with a great product in the form of Pascal in 2016, followed by a huge crypto driven boom in 2017 and 2018, and on the Professional side, a sudden and unexpected increase in interest in Deep Learning driving Tesla demand from 2017-2019 sky high.
It is worth noting what these transient factors have done to margins. When unexpected good things happen to a chip company, sales go up a lot, but there are no costs associated with those sales. Strong demand means that you can sell each chip for a higher price, but no additional design work is required, and you still pay the printer, TSMC, the same amount of money. Consequently NVDA’s margins have gone up substantially: well above their 11.9% long term average to hit a peak of 33.2%, and more recently 26.5%:
📷 https://hyperinflation2020.tumblr.com/private/618396192166100992/tumblr_RiWaD0RLscq4midoP
The question is, what would be a sensible margin going forward? Obviously 33% operating margin would attract a wall of competition and get competed away, which is why they can only be temporary. However, NVidia has shifted to having a greater proportion of its sales coming from non-OEM, and has a greater proportion of its sales coming from Professional rather than gaming. As such, maybe one can be generous and say NVDA can earn an 18% average operating margin over the next cycle. We can sense check these margins, using Intel. Intel has a long term average EBIT margin of about 25%. Intel happens to actually print the chips as well, so they collect a bigger fraction of the final product that they sell. NVDA, since it only does the design aspect, can’t earn a higher EBIT margin than Intel on average over the long term.
Tesla sales have likely gone too far and will moderate from here – perhaps down to a still more than respectable $2bn per year. Gaming resumes the long-term slide in discrete GPUs, which will likely be replaced by integrated GPUs to a greater and greater extent over time, but let’s be generous and say it maintains $3.5Bn per year for the add-in board, and let’s assume we keep getting $750mn odd of Nintendo Switch revenues (despite that product being past the peak of its cycle, with Nintendo themselves forecasting a sales decline). Let’s assume AMD struggles to make progress in Quadro, despite undercutting NVDA on price by 75%, with continued revenues of $1.2Bn. Add on the other $1.2Bn of Automotive, OEM and IP (I am not even counting the fact that car sales have collapsed and Automotive is likely to be down big), and we would end up with revenues of $8.65Bn. At an average operating margin of 20% through the cycle – the generous end of the range above – that would be $1.75Bn of operating earnings power. If I say that the recent Mellanox acquisition manages to earn enough to pay for all the interest on NVDA’s debt, and I assume a tax rate of 15%, we would have around $1.5Bn in net income.
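To make the arithmetic explicit, here is the same back-of-envelope calculation as a short Python sketch (all segment figures are the assumptions stated above, in $bn):

```python
# Back-of-envelope normalised earnings for NVDA, using the segment
# assumptions above (all figures in $bn).
segment_revenues = {
    "Tesla (datacentre)": 2.0,
    "Gaming add-in board": 3.5,
    "Nintendo Switch": 0.75,
    "Quadro": 1.2,
    "Automotive / OEM / IP": 1.2,
}

revenue = sum(segment_revenues.values())      # 8.65
operating_income = revenue * 0.20             # 20% through-cycle margin -> ~1.73
net_income = operating_income * (1 - 0.15)    # 15% tax rate -> ~1.47, call it ~1.5

print(f"Revenue: ${revenue:.2f}bn, net income: ~${net_income:.2f}bn")
print(f"Implied P/E at a $209bn market cap: ~{209 / 1.5:.0f}x")   # ~139x
print(f"Market cap at a 20x multiple: ~${1.5 * 20:.0f}bn")        # ~$30bn
```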
This company currently has a market capitalisation of $209 Bn. It blows my mind that it trades on 139x what I consider to be fairly generous earnings – earnings that NVidia never even got close to seeing before the confluence of good luck hit them. But what really stuns me is the fact that investors are actually willing to extrapolate this chain of unlikely and positive events into the future.
Shockingly, Intel has a market cap of 245Bn, only 40Bn more than NVDA, but Intel’s sales and profits are 7x higher. And while Intel is facing competition from AMD, it is much more likely to hold onto those sales and profits than NVDA is. These are absolutely stunning valuation disparities.
If I didn’t see NVDA’s price, and I started from first principles and tried to calculate a prudent price for the company, I would have estimated a $1.5Bn normalised profit, maybe on a 20x multiple giving them the benefit of the doubt despite heading into a huge recession, and considering the fact that there is not much debt and the company is very well run. That would give you a market cap of $30Bn, and a share price of $49. And it is currently $339. Wow. Obviously I’m short here!
submitted by HyperInflation2020 to stocks [link] [comments]

Syscoin Platform’s Great Reddit Scaling Bake-off Proposal

https://preview.redd.it/rqt2dldyg8e51.jpg?width=1044&format=pjpg&auto=webp&s=777ae9d4fbbb54c3540682b72700fc4ba3de0a44
We are excited to participate and present Syscoin Platform's ideal characteristics and capabilities towards a well-rounded Reddit Community Points solution!
Our scaling solution for Reddit Community Points involves 2-way peg interoperability with Ethereum. This will provide a scalable token layer built specifically for speed and high volumes of simple value transfers at a very low cost, while providing sovereign ownership and onchain finality.
Token transfers scale by taking advantage of a globally sorting mempool that provides for probabilistically secure assumptions of “as good as settled”. The opportunity here for token receivers is to have an app-layer interactivity on the speed/security tradeoff (99.9999% assurance within 10 seconds). We call this Z-DAG, and it achieves high-throughput across a mesh network topology presently composed of about 2,000 geographically dispersed full-nodes. Similar to Bitcoin, however, these nodes are incentivized to run full-nodes for the benefit of network security, through a bonded validator scheme. These nodes do not participate in the consensus of transactions or block validation any differently than other nodes and therefore do not degrade the security model of Bitcoin’s validate first then trust, across every node. Each token transfer settles on-chain. The protocol follows Bitcoin core policies so it has adequate code coverage and protocol hardening to be qualified as production quality software. It shares a significant portion of Bitcoin’s own hashpower through merged-mining.
This platform as a whole can serve token microtransactions, larger settlements, and store-of-value in an ideal fashion, providing probabilistic scalability whilst remaining decentralized according to Bitcoin design. It is accessible to ERC-20 via a permissionless and trust-minimized bridge that works in both directions. The bridge and token platform are currently available on the Syscoin mainnet. This has been gaining recent attention for use by loyalty point programs and stablecoins such as Binance USD.

Solutions

Syscoin Foundation identified a few paths for Reddit to leverage this infrastructure, each with trade-offs. The first provides the most cost-savings and scaling benefits at some sacrifice of token autonomy. The second offers more preservation of autonomy with a more narrow scope of cost savings than the first option, but savings even so. The third introduces more complexity than the previous two yet provides the most overall benefits. We consider the third as most viable as it enables Reddit to benefit even while retaining existing smart contract functionality. We will focus on the third option, and include the first two for good measure.
  1. Distribution, burns and user-to-user transfers of Reddit Points are entirely carried out on the Syscoin network. This full-on approach to utilizing the Syscoin network provides the most scalability and transaction cost benefits of these scenarios. The tradeoff here is distribution and subscription handling likely migrating away from smart contracts into the application layer.
  2. The Reddit Community Points ecosystem can continue to use existing smart contracts as they are used today on the Ethereum mainchain. Users migrate a portion of their tokens to Syscoin, the scaling network, to gain much lower fees, scalability, and a proven base layer, without sacrificing sovereign ownership. They would use Syscoin for user-to-user transfers: tips redeemable in ten seconds or less, a high-throughput relay network, and onchain settlement at a 60-second block target.
  3. Integration between Matic Network and Syscoin Platform - similar to Syscoin’s current integration with Ethereum - will provide Reddit Community Points with EVM scalability (including the Memberships ERC777 operator) on the Matic side, and performant simple value transfers, robust decentralized security, and sovereign store-of-value on the Syscoin side. It’s “the best of both worlds”. The trade-off is more complex interoperability.

Syscoin + Matic Integration

Matic and Blockchain Foundry Inc, the public company formed by the founders of Syscoin, recently entered a partnership for joint research and business development initiatives. This is ideal for all parties as Matic Network and Syscoin Platform provide complementary utility. Syscoin offers characteristics for sovereign ownership and security based on Bitcoin’s time-tested model, and shares a significant portion of Bitcoin’s own hashpower. Syscoin’s focus is on secure and scalable simple value transfers, trust-minimized interoperability, and opt-in regulatory compliance for tokenized assets rather than scalability for smart contract execution. On the other hand, Matic Network can provide scalable EVM for smart contract execution. Reddit Community Points can benefit from both.
Syscoin + Matic integration is actively being explored by both teams, as it is helpful to Reddit, Ethereum, and the industry as a whole.

Proving Performance & Cost Savings

Our POC focuses on 100,000 on-chain settlements of token transfers on the Syscoin Core blockchain. Transfers and burns perform equally with Syscoin. For POCs related to smart contracts (subscriptions, etc), refer to the Matic Network proposal.
On-chain settlement of 100k transactions was accomplished within roughly twelve minutes, well exceeding Reddit’s expectation of five days. This was performed using six full-nodes operating on compute-optimized AWS c4.2xlarge instances which were geographically distributed (Virginia, London, Sao Paulo Brazil, Oregon, Singapore, Germany). A higher quantity of settlements could be reached within the same time-frame with more broadcasting nodes involved, or using hosts with more resources for faster execution of the process.
Addresses used: 100,014
The demonstration was executed using this tool. The results can be seen in the following blocks:
612722: https://sys1.bcfn.ca/block/6d47796d043bb4c508d29123e6ae81b051f5e0aaef849f253c8f3a6942a022ce
612723: https://sys1.bcfn.ca/block/8e2077f743461b90f80b4bef502f564933a8e04de97972901f3d65cfadcf1faf
612724: https://sys1.bcfn.ca/block/205436d25b1b499fce44c29567c5c807beaca915b83cc9f3c35b0d76dbb11f6e
612725: https://sys1.bcfn.ca/block/776d1b1a0f90f655a6bbdf559ff5072459cbdc5682d7615ff4b78c00babdc237
612726: https://sys1.bcfn.ca/block/de4df0994253742a1ac8ac9eec8d2a8c8b0a6d72c53d6f3caa29bb6c171b0a6b
612727: https://sys1.bcfn.ca/block/e5e167c52a9decb313fbaadf49a5e34cb490f8084f642a850385476d4ef10d70
612728: https://sys1.bcfn.ca/block/ab64d989edc71890e7b5b8491c20e9a27520dc45a5f7c776d3dae79057f59fe7
612729: https://sys1.bcfn.ca/block/5e8b7ecd0e36f99d07e4ea6e135fc952bf7ec30164ab6f4d1e98b0f2d405df6d
612730: https://sys1.bcfn.ca/block/d395df3d31dde60bbb0bece6bd5b358297da878f0beb96be389e5f0e043580a3
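For context, the settlement rate implied by that run is straightforward to check:

```python
# Implied on-chain settlement rate of the POC run described above.
transactions = 100_000
minutes = 12

tps = transactions / (minutes * 60)
print(f"~{tps:.0f} transactions per second sustained")          # ~139 tx/s

# Reddit's bake-off asked for 100k transactions within 5 days.
window_used = minutes / (5 * 24 * 60)
print(f"Fraction of the 5-day window used: {window_used:.4%}")  # ~0.17%
```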
It is important to note that this POC is not focused on Z-DAG. The performance of Z-DAG has been benchmarked within realistic network conditions: Whiteblock’s audit is publicly available. Network latency tests showed an average TPS around 15k with burst capacity up to 61k. Zero-latency control group exhibited ~150k TPS. Mainnet testing of the Z-DAG network is achievable and will require further coordination and additional resources.
Even further optimizations are expected in the upcoming Syscoin Core release which will implement a UTXO model for our token layer bringing further efficiency as well as open the door to additional scaling technology currently under research by our team and academic partners. At present our token layer is account-based, similar to Ethereum. Opt-in compliance structures will also be introduced soon which will offer some positive performance characteristics as well. It makes the most sense to implement these optimizations before performing another benchmark for Z-DAG, especially on the mainnet considering the resources required to stress-test this network.

Cost Savings

Total cost for these 100k transactions: $0.63 USD
See the live fee comparison for savings estimation between transactions on Ethereum and Syscoin. Below is a snapshot at time of writing:
ETH price: $318.55. ETH gas price: 55.00 Gwei (≈$0.37 per standard transfer).
Syscoin price: $0.11
Snapshot of live fee comparison chart
Z-DAG provides a more efficient fee-market. A typical Z-DAG transaction costs 0.0000582 SYS. Tokens can be safely redeemed/re-spent within seconds or allowed to settle on-chain beforehand. The costs should remain about this low for microtransactions.
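The figures above can be reproduced directly: a simple ETH value transfer costs 21,000 gas, so at the snapshot prices the comparison works out as follows.

```python
# Reproducing the fee snapshot above (prices at time of writing).
eth_price = 318.55            # USD per ETH
gas_price_gwei = 55.0
gas_per_transfer = 21_000     # gas cost of a simple ETH value transfer

eth_fee_usd = gas_per_transfer * gas_price_gwei * 1e-9 * eth_price
print(f"Ethereum transfer fee: ${eth_fee_usd:.2f}")           # ~$0.37

sys_price = 0.11              # USD per SYS
zdag_fee_sys = 0.0000582      # typical Z-DAG transaction fee in SYS
sys_fee_usd = zdag_fee_sys * sys_price
print(f"Syscoin Z-DAG fee: ${sys_fee_usd:.7f}")               # ~$0.0000064

print(f"Savings factor: ~{eth_fee_usd / sys_fee_usd:,.0f}x")  # tens of thousands
```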
Syscoin will achieve further reduction of fees and even greater scalability with offchain payment channels for assets, with Z-DAG as a resilience fallback. New payment channel technology is one of the topics under research by the Syscoin development team with our academic partners at TU Delft. In line with the calculation in the Lightning Networks white paper, payment channels using assets with Syscoin Core will bring theoretical capacity for each person on Earth (7.8 billion) to have five on-chain transactions per year, per person, without requiring anyone to enter a fee market (aka “wait for a block”). This exceeds the minimum LN expectation of two transactions per person, per year; one to exist on-chain and one to settle aggregated value.
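That figure can be sanity-checked from the chain parameters quoted in the Syscoin Specs section below (16MB of block bandwidth per minute and ~200-byte average transactions):

```python
# Sanity check of the "five on-chain transactions per person per year" figure.
bandwidth_per_min = 16 * 1024 * 1024     # bytes of block bandwidth per minute
avg_tx_size = 200                        # bytes per transaction

tx_per_year = bandwidth_per_min / avg_tx_size * 60 * 24 * 365
population = 7.8e9

print(f"{tx_per_year / population:.1f} on-chain tx per person per year")  # ~5.7
```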

Tools, Infrastructure & Documentation

Syscoin Bridge

Mainnet Demonstration of Syscoin Bridge with the Basic Attention Token ERC-20
A two-way blockchain interoperability system that uses Simple Payment Verification to enable:
  • Any Standard ERC-20 token to be moved from Ethereum to the Syscoin blockchain as a Syscoin Platform Token (SPT), and back to Ethereum
  • Any SPT to be moved from Syscoin to the Ethereum blockchain as an ERC-20 token, and back to Syscoin
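As a rough illustration of the SPV-based peg (not the actual bridge contracts, which use the superblock and Agent mechanisms documented below), the core idea is: burn on the origin chain, then mint on the destination chain only against a Merkle proof checked against a header the bridge already trusts.

```python
import hashlib

# Conceptual shape of an SPV-style 2-way peg (illustration only).
def merkle_fold(tx_hash: str, branch: list[str]) -> str:
    """Fold a Merkle branch up from a transaction hash toward the root."""
    h = tx_hash
    for sibling in branch:
        h = hashlib.sha256((h + sibling).encode()).hexdigest()
    return h

def mint_if_proven(burn_tx: str, branch: list[str], trusted_root: str,
                   amount: int, ledger: dict, dest: str) -> bool:
    # Mint on the destination chain only if the burn on the origin chain
    # is proven against a block header root the bridge already accepted.
    if merkle_fold(burn_tx, branch) != trusted_root:
        return False
    ledger[dest] = ledger.get(dest, 0) + amount
    return True
```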

Benefits

  • Permissionless
  • No counterparties involved
  • No trading mechanisms involved
  • No third-party liquidity providers required
  • Cross-chain Fractional Supply - 2-way peg - Token supply maintained globally
  • ERC-20s gain vastly improved transactionality with the Syscoin Token Platform, along with the security of bitcoin-core-compliant PoW.
  • SPTs gain access to all the tooling, applications and capabilities of Ethereum for ERC-20, including smart contracts.
https://preview.redd.it/l8t2m8ldh8e51.png?width=1180&format=png&auto=webp&s=b0a955a0181746dc79aff718bd0bf607d3c3aa23
https://preview.redd.it/26htnxzfh8e51.png?width=1180&format=png&auto=webp&s=d0383d3c2ee836c9f60b57eca35542e9545f741d

Source code

https://github.com/syscoin/?q=sysethereum
Main Subprojects

API

Tools to simplify using Syscoin Bridge as a service with dapps and wallets will be released some time after implementation of Syscoin Core 4.2. These will be based upon the same processes which are automated in the current live Sysethereum Dapp that is functioning with the Syscoin mainnet.

Documentation

Syscoin Bridge & How it Works (description and process flow)
Superblock Validation Battles
HOWTO: Provision the Bridge for your ERC-20
HOWTO: Setup an Agent
Developer & User Diligence

Trade-off

The Syscoin Ethereum Bridge is secured by Agent nodes participating in a decentralized and incentivized model that involves roles of Superblock challengers and submitters. This model is open to participation. The benefits here are trust-minimization, permissionless-ness, and potentially less legal/regulatory red-tape than interop mechanisms that involve liquidity providers and/or trading mechanisms.
The trade-off is that due to the decentralized nature there are cross-chain settlement times of one hour to cross from Ethereum to Syscoin, and three hours to cross from Syscoin to Ethereum. We are exploring ways to reduce this time while maintaining decentralization via zkp. Even so, an “instant bridge” experience could be provided by means of a third-party liquidity mechanism. That option exists but is not required for bridge functionality today. Typically bridges are used with batch value, not with high frequencies of smaller values, and generally it is advantageous to keep some value on both chains for maximum availability of utility. Even so, the cross-chain settlement time is good to mention here.

Cost

Ethereum -> Syscoin: Matic or Ethereum transaction fee for bridge contract interaction, negligible Syscoin transaction fee for minting tokens
Syscoin -> Ethereum: Negligible Syscoin transaction fee for burning tokens, 0.01% transaction fee paid to Bridge Agent in the form of the ERC-20, Matic or Ethereum transaction fee for contract interaction.

Z-DAG

Zero-Confirmation Directed Acyclic Graph is an instant settlement protocol that is used as a complementary system to proof-of-work (PoW) in the confirmation of Syscoin service transactions. In essence, a Z-DAG is simply a directed acyclic graph (DAG) where validating nodes verify the sequential ordering of transactions that are received in their memory pools. Z-DAG is used by the validating nodes across the network to ensure that there is absolute consensus on the ordering of transactions and no balances are overflowed (no double-spends).
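A toy model of that balance check (illustrative only, not Syscoin Core code) makes the idea concrete: every node replays transfers in the order received and rejects any transfer that would overflow the sender’s balance.

```python
# Toy model of Z-DAG's mempool ordering check: transfers are replayed in
# sequence and a conflicting double-spend is rejected before any block.
balances = {"alice": 100, "bob": 0}

def accept(sender: str, receiver: str, amount: int) -> bool:
    if balances.get(sender, 0) < amount:
        return False                   # would overflow the sender's balance
    balances[sender] -= amount
    balances[receiver] = balances.get(receiver, 0) + amount
    return True

print(accept("alice", "bob", 60))      # True  -- first spend of 60 is ordered
print(accept("alice", "carol", 60))    # False -- conflicting second spend
```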

Benefits

  • Unique fee-market that is more efficient for microtransaction redemption and settlement
  • Uses decentralized means to enable tokens with value transfer scalability that is comparable or exceeds that of credit card networks
  • Provides high throughput and secure fulfillment even if blocks are full
  • Probabilistic and interactive
  • 99.9999% security assurance within 10 seconds
  • Can serve payment channels as a resilience fallback that is faster and lower-cost than falling-back directly to a blockchain
  • Each Z-DAG transaction also settles onchain through Syscoin Core at 60-second block target using SHA-256 Proof of Work consensus
https://preview.redd.it/pgbx84jih8e51.png?width=1614&format=png&auto=webp&s=5f631d42a33dc698365eb8dd184b6d442def6640

Source code

https://github.com/syscoin/syscoin

API

Syscoin-js provides tooling for all Syscoin Core RPCs including interactivity with Z-DAG.

Documentation

Z-DAG White Paper
Useful read: An in-depth Z-DAG discussion between Syscoin Core developer Jag Sidhu and Brave Software Research Engineer Gonçalo Pestana

Trade-off

Z-DAG enables the ideal speed/security tradeoff to be determined per use-case in the application layer. It minimizes the sacrifice required to accept and redeem fast transfers/payments while providing more-than-ample security for microtransactions. This is supported on the premise that a Reddit user receiving points does need security, yet generally doesn’t want or need to wait for the same level of security as a nation-state settling an international trade debt. In any case, each Z-DAG transaction settles onchain at a block target of 60 seconds.
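In application terms, that app-layer choice might look like the following acceptance policy (thresholds and helper names here are hypothetical):

```python
import time

# Hypothetical app-layer policy on top of Z-DAG: small tips are accepted
# after the ~10s probabilistic assurance window; larger transfers wait for
# on-chain settlement at the 60-second block target.
ZDAG_ASSURANCE_SECONDS = 10
SMALL_PAYMENT_LIMIT = 50               # app-chosen threshold

def wait_for_acceptance(amount: int, is_settled_on_chain) -> str:
    if amount <= SMALL_PAYMENT_LIMIT:
        time.sleep(ZDAG_ASSURANCE_SECONDS)     # 99.9999% assurance window
        return "accepted via Z-DAG"
    while not is_settled_on_chain():           # poll until a block includes it
        time.sleep(5)
    return "accepted on-chain"
```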

Syscoin Specs

Syscoin 3.0 White Paper
(4.0 white paper is pending. For improved scalability and less blockchain bloat, some features of v3 no longer exist in current v4: Specifically Marketplace Offers, Aliases, Escrow, Certificates, Pruning, Encrypted Messaging)
  • 16MB block bandwidth per minute assuming segwit witness carrying transactions, and transactions ~200 bytes on average
  • SHA256 merge mined with Bitcoin
  • UTXO asset layer, with base Syscoin layer sharing identical security policies as Bitcoin Core
  • Z-DAG on asset layer, bridge to Ethereum on asset layer
  • On-chain scaling with prospect of enabling enterprise grade reliable trustless payment processing with on/offchain hybrid solution
  • Focus only on Simple Value Transfers. MVP of blockchain consensus footprint is balances and ownership of them. Everything else can reduce data availability in exchange for scale (Ethereum 2.0 model). We leave that to other designs, we focus on transfers.
  • Future integrations of MAST/Taproot to get more complex value transfers without trading off trustlessness or decentralization.
  • Zero-knowledge proofs are a new cryptographic frontier. We are dabbling here to generalize the concept of bridging and also verify the state of a chain efficiently. We also apply it in our Digital Identity projects at Blockchain Foundry (a publicly traded company which develops Syscoin software for clients). We are also looking to integrate privacy preserving payment channels for off-chain payments through a zkSNARK hub & spoke design which does not suffer from the HTLC attack vectors evident on LN. Many of the issues plaguing Lightning Network can be resolved using a zkSNARK design whilst also providing the ability to do a multi-asset payment channel system. We have found a showstopper attack (American Call Option) on LN if we were to use multiple assets. This would not exist in a system such as this.

Wallets

Web3 and mobile wallets are under active development by Blockchain Foundry Inc as WebAssembly applications and expected for release not long after mainnet deployment of Syscoin Core 4.2. Both of these will be multi-coin wallets that support Syscoin, SPTs, Ethereum, and ERC-20 tokens. The Web3 wallet will provide functionality similar to Metamask.
Syscoin Platform and tokens are already integrated with Blockbook. Custom hardware wallet support currently exists via ElectrumSys. First-class HW wallet integration through apps such as Ledger Live will exist after 4.2.
Current supported wallets
Syscoin Spark Desktop
Syscoin-Qt

Explorers

Mainnet: https://sys1.bcfn.ca (Blockbook)
Testnet: https://explorer-testnet.blockchainfoundry.co

Thank you for close consideration of our proposal. We look forward to feedback, and to working with the Reddit community to implement an ideal solution using Syscoin Platform!

submitted by sidhujag to ethereum [link] [comments]

Amazing AMA from Douglas Horn

AMA Recap telos Foundation with Crypto Hunters
On August 2, 2020 at 12:00 WIB (Indonesia time) / August 1, 2020 at 10:00 PM (PST), the TELOS AMA started in the Crypto Hunters Telegram group, with Mr. Douglas as guest speaker and Gus Fahlev from Crypto Hunters as moderator. As part of the campaign, 10 lucky participants who asked questions via Google Forms or during the AMA session would share a total prize of $100 in TELOS (TLOS).
The following is a summary of the AMA questions and answers, as announced by the moderator and the guest speaker.
Segment 1
Question 1: Can you explain us, what is Telos?
Answer: Telos is a blockchain platform for smart contracts. It is a low-latency (a new block every half second), high-capacity blockchain (currently in the top 2 blockchains in transactions per day, according to Blocktivity.info) with no transaction fees. Telos also has many unique features that allow developers to make better dapps, such as our Telos Decide governance engine.
Question 2: what ecosystem is used by telos?
Answer: Telos is its own Layer-1 blockchain, not a token on another blockchain. The technology behind Telos is EOSIO, the same technology used by EOS and WAX, for example.
Question 3: I see that Telos uses EOSIO platform, what are the very significant advantages that distinguish Telos from other projects?
Answer: Telos uses the EOSIO platform but we have built several additional tools. Some of these add more security and resiliency to the blockchain, such as testing block producers and removing non-performant ones, but most are related to development. Telos provides attractive development tools that aren’t available elsewhere. Telos Decide is a governance platform that lets any group create self-governance tools easily. These run on Telos at very little cost and can provide all kinds of voting, elections, initiative ballots, committee management and funds allocation. Telos also has Telos EVM, an Ethereum virtual machine that can run Ethereum Solidity contracts at hundreds of times the speed of Ethereum and with no costs. Another Telos technology that is deploying soon is dStor, which is a decentralized cloud storage system associated with Telos so that dapps can store files controlled by blockchain contracts.
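To give a flavour of what Telos Decide automates, a stake-weighted ballot tally is conceptually just the following (a simplified illustration, not the actual contract code):

```python
# Simplified stake-weighted ballot tally of the kind Telos Decide runs
# on-chain (illustration only).
votes = [
    ("alice", "yes", 1500),   # (voter, option, staked TLOS weight)
    ("bob",   "no",   400),
    ("carol", "yes",  250),
]

tally: dict[str, int] = {}
for voter, option, weight in votes:
    tally[option] = tally.get(option, 0) + weight

winner = max(tally, key=tally.get)
print(tally, "->", winner)    # {'yes': 1750, 'no': 400} -> yes
```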
Question 4: At what stage is Teloa Road Map now? what are the latest updates currently being realized?
Answer: Telos launched its mainnet in December 2018 and has so far produced over 100,000,000 blocks without ever stopping or rolling back the chain. This is likely a record for a public blockchain. We have an ongoing group, the Telos Core Developers, who build and maintain the code and are paid by our Telos Works funding system that is voted on by the Telos token holders. Telos is a leader in blockchain governance and regularly amends its governance rules based on smart contract powered voting called Telos Amend. You can see the current Telos governance rules as stored live on the blockchain at tbnoa.org.
The most recent updates were adding new features to Telos Decide to make it more powerful, implementing EOSIO v2.0 which increased the capacity of Telos about 8-10 times what it previously was, and implementing Telos EVM on our Testnet.
We are currently working on better interfaces for Telos Decide voting, and building more infrastructure around Telos EVM so that it is ready to deploy on our mainnet.
Question 5: Is telos currently available on an exchange? and is it ready to be traded?
Answer: Telos has been trading on exchanges for over a year. The largest exchanges are Probit, CoinTiger, CoinLim, and P2PB2B. Other exchanges include Newdex and Alcor. We expect to be listed on larger exchanges in the near future.
Question 6: Now is the time when defi tokens begin to develop, can telos be categorized as a defi project? and what strategies for this year and in the years to come prepared by telos?
Answer: Telos is a smart contract platform, but it already has many DeFi tools built for it including REX staking rewards with a current yield of ~19% APR, smart contract controlled token swaps (like Bancor) with no counterparty called Telos Swaps, a common liquidity pool/order book shared by multiple DEXs to improve liquidity called EvolutionDEX. Wrapped BTC, ETH, XRP, EOS, and other tokens can be brought to Telos and exchanged or used via smart contracts through Transledger. We have more DeFi tools coming all the time including two new offerings in the next few weeks that will be the first of their kind.
Question 7: Governance is an important topic in blockchain and Telos is considered a leader in this area. Why is that?
Answer: Telos is among the top blockchain projects in terms of how it empowers its users to guide the growth of the chain—along the likes of Tezos or new DeFi tokens that offer governance coins. Telos users continuously elect the validating nodes, called Block Producers, that operate the network based on a set of governance documents such as the Telos Blockchain Network Operating Agreement (TBNOA). These are all stored entirely on-chain (viewable at tbnoa.org) and can be modified by smart contract through blockchain voting using Telos Amend. You can see examples of this at https://chainspector.io/governance/ratify-proposals Telos also has a robust user-voted funding mechanism called Telos Works that has funded many projects and is one of the more successful blockchain incubators around. Voting for all of these can be done in a number of ways including block explorers, wallets like Sqrl (desktop) and Telos Wallet (mobile), telos.net and Chainspector (https://chainspector.io/governance/telos-works). But Telos goes beyond any other chain-level governance by making all of these features and more available to any dapp on Telos through Telos Decide governance engine, making it easy for any dapp or DAO to add robust, highly customized voting.
Segment 2 from google form
Question 1: Defi projects are now trending whether telos will also go to Defi projects, to increase investors or the community?
Answer: Yes, we have several DeFi tools on Telos that can work together:
Telos Swaps is an automated, zero-counterparty token swapping smart contract where you can exchange any Telos tokens you may want at any time.
Telos has DEXs and uses a common order book called EvolutionDEX that's available to any DEX so that a buy order on one can be matched against a sell order on another. This greatly increases liquidity for traders.
We have staking rewards though the Resource EXchange (REX) with rewards currently at about 19% APR.
We also have "wrapped" BTC, ETH, and other tokens that can be traded on Telos or used by its smart contracts at half-second transaction times with no transaction fees. This makes Telos a Bitcoin or Ethereum second layer or state channel that's much faster even than Lightning Network and has no fees once the BTC has been brought to Telos.
Question 2: Telos aim is to build a new global economy could you explain how whole ecosystem works? There are already many centralized competitors so what is decentralization aspect in telos?
Answer: Telos is one of the most decentralized blockchain's in the world. It is operated by 51 validators (block producers) who validate blocks in any month. These are voted for on an ongoing basis by Telos account holders.
Telos is also economically decentralized with no large whales like Bitcoin, Ethereum, XRP or EOS because Telos never performed an ICO and limited the size of genesis accounts to 40,000 TLOS max.
Telos is also geographically decentralized with users and block producers on every continent but Antarctica and in numerous countries. Many are in North America and Western Europe, but there are also users in Asia and Australia, and large contingents in Latin America and Africa. Telos has had a Block Producer in Indonesia since the beginning and some dapps on Telos are based in Indonesia as well, like SEEDS, for example.
Question 3: Most investors focus only on the token price in the short term instead of the real value of the project.
Can #TELOS tell me the benefits for investors holding #TELOS the long term?
Answer: That’s true about crypto speculators and traders, certainly. Traders are usually looking for coins with good positive momentum that they hope will continue. But these are often pump and dumps where a few people get in early, pump the price, and then get out at the expense of new investors. That’s very unfortunate. Telos isn’t like this. One reason is that there aren’t large whales who can easily manipulate the price.
Telos seems to be greatly undervalued compared to its peers. Telos has capacity like EOS and well above XRP, XLM, Tron, Ethereum. But its value is minuscule relative to these. Telos is a leader in blockchain governance like Tezos but its marketcap is tiny in comparison. Telos onboarded 100,000 new accounts last month and is appearing in the leading crypto press every week with new dapps or developments. So there’s some disconnect between the value of Telos and the price. In my experience, these tend to equalize once more people learn about a project.
Question 4: Eos Problems and How Telos Will Solve Them?
Answer: Telos originally set out to solve problems with EOS. It was successful in this and now Telos stands on its own, and our roadmap is more about empowering users. In short, these are some of the EOS problems we already solved:
RAM speculation - Telos had a plan to reduce RAM speculation through a published guidance price that has been extremely successful. The RAM price is guided by market forces but has remained within 10% of the guidance price since launch.
CPU resources - Telos implemented the Telos Resource Improved Management Plan many months ago which was a 7-point approach to making EIDOS-type resource mining unprofitable on Telos. It has largely been successful and Telos has not experienced any resource shortages.
Exchange Collusion/Voting - Telos governance does not permit Exchanges to vote with user tokens. This prevent voting situations seen on EOS or STEEM.
Block Producer collusion - Telos has minimum requirements for block producers and does not allow anyone to own more than one block producer. Those who are found doing so (there have been about 3 cases so far) have been removed and sanctioned in accordance with the rules of the TBNOA.
Question 5: What ecosystems do telos use? and why telos prefers to use EOS network over BEP2 or ERC20? what layer is used telos, can you please explain?
Answer: uses the EOSIO protocol because it is the fastest and most powerful in the world and it also receives the fastest upgrades and ongoing development compared to other blockchain technologies. EOS and WAX also use the EOSIO protocol but they are completely different chains.
Telos is a Layer 1 protocol, meaning that it is its own blockchain that other dapps and smart contracts deploy upon.
One thing that happens when a blockchain like Telos has much, much higher speed and capacity than others like Bitcoin or Ethereum is that Telos can actually run those other blockchains better on its own platform than they can natively. For example, a number of tokens can come in to Telos as wrapped tokens. BTC, ETH, XRP are all current examples of tokens that can be on Telos as wrapped tokens. Once there, these can all be moved around with half-second transaction times and no transaction fees, so they are a better second layer for Bitcoin or Ethereum than Lightning Network or Loom.
Telos can also emulate other chains, which we are doing using Telos EVM which emulates the Ethereum Virtual Machine at about 300 times faster and with no gas fees or congestion compared to Ethereum native deployment. Telos can run Ethereum (Solidity) smart contracts without any changes required. Telos EVM is already deployed on the Telos Testnet and will move to our mainnet soon. So anyone who wants to run ERC-20 tokens on Telos can do so easily and they will be faster and with much less cost than running the same contract on Ethereum.
Segment 3 free asking
Question: I am happy to see new things created by the Telos team. Like What concept did you build in 2020 to make Telos superior?
Answer: Currently, I think Telos Decide is the most unique and powerful feature we have built. There are all kinds of organizations that need to vote. Apartment buildings, school boards, unions, tribes, youth sports leagues, city councils. Voting is hard, time consuming, and expensive for many. Telos Decide makes voting easy, convenient, and transparent. That will be a major improvement and disrupt old style voting. It also goes for businesses and corporate governance. Even before COVID it was important, but now people can’t really gather in one place so fraud-proof voting is very important. No one has the tools that Telos has. And if they try to copy us, well, we are already way out ahead working on the next features.
Question: If we look about partnerships, Telos has many partnership ! so what's the importance of that partnership for Telos? And How will you protect the value of Telos to your partners or investors ??
Answer: Many of the partnerships are dapps that have decided to deploy on Telos and receive some level of help from the TCD or Telos Foundation to do so. Once a dapp deploys on a chain, it really is like a long term partnership.
Many dapps will become block producers as well and join in the governance of Telos. I suspect that in a few years, most block producers will be the large dapps on the platform with just a few remaining like my company GoodBlock. Of course, we will have our own apps out as well so I guess we'll be developers too.
Telos is very fiscally responsible for investors. We spend little. There has not been any actual inflation on the chain in almost a year (the token supply has remained unchanged at about 355M TLOS). We are actively working with dapps to bring more to Telos, and with exchanges and other services like fiat on- and off-ramps, to increase value for users.
Question: In challenging crypto market condition any project is really difficult to survive and we are witnessing that there are many platforms . What is telos project plan for surviving in this long blockchain marathon? In this plan, what motivates long term investors and believers?
Answer: True.
While we currently have a low token price, Telos as a DPOS chain can be maintained and grow without a massive army of miners and still maintain BFT.
But the risk is really not whether Telos can continue. Already there are enough dapps that if the block producers went away somehow (not gonna happen) the dapps would just run the chain themselves.
But with 100,000 new users last month and new dapps all the time, we are looking to join the top 5 dapp platforms on DappRadar soon. Survival as a project is not in question.
One of the big reasons is that we never did any ICO and Telos is not a company. So regulatory risks aren't there and there's no company to go bankrupt or fail. We have already developed a bootstrapped system to pay block producers and core developers. So we aren't like a company that will run out of runway sometime.
Question: Could you explain what is DSTOR? What will it contribute to your ecosystem?
Answer: dStor is a decentralized cloud storage system that will have the performance of AWS or Azure with much lower costs and true decentralization. It’s based on a highly modified version of IPFS; we have applied for patents on our implementation. It means that dapps will be able to store data like files, images, sound, etc. in a decentralized way.
Question: Trust and security is very important in any business , what makes investors , customer and users safe secure when working with TELOS??
Answer: Telos is decentralized in a way that's more like bitcoin than other blockchains (but without the whales who can manipulate price). There was never any single company that started Telos, so there's no company whose CEO could make decisions for the network. There are numerous block producers who decide on any operational issue that isn't clearly described in the TBNOA governance documents. And to get to an action, 15 of the 21 currently active BPs need to sign a multisig transaction. So that's a high threshold. But also, the TBNOA speaks to a large number of issues and so the BPs can't just make up their own rules.
Since there are really no whales, no one can vote in any kind of change or bring in their own BPs with their votes. This is also very different from other chains where there are whales. Telos is not located in any one country, so our rules can't be driven by one nation's politics.
All in all, this level of decentralization sets Telos apart from almost any blockchain project in existence. People don't have to trust Telos because the system is designed to make trust unnecessary.
submitted by TelosNetwork to TELOS [link] [comments]

DFINITY Research Report

Author: Gamals Ahmed, CoinEx Business Ambassador
ABSTRACT
The DFINITY blockchain computer provides a secure, performant and flexible consensus mechanism. At its core, DFINITY contains a decentralized randomness beacon, which acts as a verifiable random function (VRF) that produces a stream of outputs over time. The novel technique behind the beacon relies on the existence of a unique, deterministic, non-interactive, DKG-friendly threshold signature scheme. The only known examples of such a scheme are pairing-based and derived from BLS.
The DFINITY blockchain is layered on top of the DFINITY beacon and uses the beacon as its source of randomness for leader selection and leader ranking. A “weight” is attributed to a chain based on the ranks of the leaders who propose the blocks in the chain, and that weight is used to select between competing chains. The blockchain is further hardened by a notarization process which dramatically improves the time to finality and eliminates the nothing-at-stake and selfish mining attacks.
The DFINITY consensus algorithm is designed to scale through continuous quorum selections driven by the random beacon. In practice, DFINITY achieves block times of a few seconds and transaction finality after only two confirmations. The system gracefully handles temporary losses of network synchrony, including network splits, while it is provably secure under synchrony.

1.INTRODUCTION

DFINITY is building a new kind of public decentralized cloud computing resource. The platform uses blockchain technology to provide unlimited capacity, high performance, and algorithmic governance shared by the world, with the capability to power autonomous self-updating software systems. This enables organizations to design and deploy custom-tailored cloud computing projects, thereby reducing enterprise IT system costs by 90%.
DFINITY aims to explore new territory and prove that the blockchain opportunity is far broader and deeper than anyone has hitherto realized, unlocking the opportunity with powerful new crypto.
Although a standalone project, DFINITY is not maximalist minded and is a great supporter of Ethereum.
The DFINITY blockchain computer provides a secure, performant and flexible consensus mechanism. At its core, DFINITY contains a decentralized randomness beacon, which acts as a verifiable random function (VRF) that produces a stream of outputs over time. The novel technique behind the beacon relies on the existence of a unique, deterministic, non-interactive, DKG-friendly threshold signature scheme. The only known examples of such a scheme are pairing-based and derived from BLS.
DFINITY’s consensus mechanism has four layers: notary (provides fast finality guarantees to clients and external observers), blockchain (builds a blockchain from validated transactions via the Probabilistic Slot Protocol driven by the random beacon), random beacon (provides the source of randomness for all higher layers like smart contract applications), and identity (provides a registry of all clients).
DFINITY’s consensus mechanism has four layers

Figure1: DFINITY’s consensus mechanism layers
1. Identity layer:
Active participants in the DFINITY Network are called clients. Clients are registered with permanent, pseudonymous identities. Moreover, DFINITY supports open membership by providing a protocol for registering new clients by depositing a stake with an insurance period. This is the responsibility of the first layer.
2. Random Beacon layer:
Provides the source of randomness (VRF) for all higher layers including applications (smart contracts). The random beacon in the second layer is an unbiasable, verifiable random function (VRF) that is produced jointly by registered clients. Each random output of the VRF is unpredictable by anyone until just before it becomes available to everyone. This is a key technology of the DFINITY system, which relies on a threshold signature scheme with the properties of uniqueness and non-interactivity.

https://preview.redd.it/hkcf53ic05e51.jpg?width=441&format=pjpg&auto=webp&s=44d45c9602ee630705ce92902b8a8379201d8111
3. Blockchain layer:
The third layer deploys the “probabilistic slot protocol” (PSP). This protocol ranks the clients for each height of the chain, in an order that is derived deterministically from the unbiased output of the random beacon for that height. A weight is then assigned to block proposals based on the proposer’s rank such that blocks from clients at the top of the list receive a higher weight. Forks are resolved by giving favor to the “heaviest” chain in terms of accumulated block weight – quite similar to how traditional proof-of-work consensus is based on the highest accumulated amount of work.
The first advantage of the PSP protocol is that the ranking is available instantaneously, which allows for a predictable, constant block time. The second advantage is that there is always a single highest-ranked client, which allows for a homogenous network bandwidth utilization. Instead, a race between clients would favor a usage in bursts.
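A minimal sketch of the fork rule this implies (the weight function here is illustrative, not the one specified in the protocol):

```python
# Sketch of PSP's fork choice: each block's weight is derived from its
# proposer's rank for that height (rank 0 = top of the list), and the
# chain with the highest accumulated weight wins.
def block_weight(proposer_rank: int) -> float:
    return 2.0 ** (-proposer_rank)         # top-ranked proposers weigh most

def chain_weight(proposer_ranks: list[int]) -> float:
    return sum(block_weight(r) for r in proposer_ranks)

honest_chain = [0, 0, 1, 0]                # mostly top-ranked proposers
attacker_fork = [2, 1, 3, 2]               # lower-ranked proposers
print(chain_weight(honest_chain) > chain_weight(attacker_fork))   # True
```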
4. Notarization layer:
Provides fast finality guarantees to clients and external observers. DFINITY deploys the novel technique of block notarization in its fourth layer to speed up finality. A notarization is a threshold signature under a block created jointly by registered clients. Only notarized blocks can be included in a chain. RSA-based alternatives exist but suffer from the impracticality of setting up the threshold keys without a trusted dealer.
DFINITY achieves its high speed and short block times exactly because notarization is not full consensus.
DFINITY does not suffer from the selfish mining attack or the nothing-at-stake problem because the notarization step makes it impossible for an adversary to build and maintain a chain of linked, notarized blocks in secret.
DFINITY’s consensus is designed to operate on a network of millions of clients. To enable scalability to this extent, the random beacon and notarization protocols are designed such that they can be safely and efficiently delegated to a committee.

1.1 OVERVIEW ABOUT DFINITY

DFINITY is a blockchain-based cloud-computing project that aims to develop an open, public network, referred to as the “internet computer,” to host the next generation of software and data. It is a decentralized and non-proprietary network to run the next generation of mega-applications. DFINITY dubbed this public network “Cloud 3.0”.
DFINITY is a third generation virtual blockchain network that sets out to function as an “intelligent decentralised cloud,”¹ strongly focused on delivering a viable corporate cloud solution. The DFINITY project is overseen, supported and promoted by DFINITY Stiftung, a not-for-profit foundation based in Zug, Switzerland.
DFINITY is a decentralized network design whose protocols generate a reliable “virtual blockchain computer” running on top of a peer-to-peer network upon which software can be installed and can operate in the tamperproof mode of smart contracts.
DFINITY introduces algorithmic governance in the form of a “Blockchain Nervous System” that can protect users from attacks and help restart broken systems, dynamically optimize network security and efficiency, upgrade the protocol and mitigate misuse of the platform, for example by those wishing to run illegal or immoral systems.
DFINITY is an Ethereum-compatible smart contract platform that is implementing some revolutionary ideas to address blockchain performance, scaling, and governance. Whereas DFINITY could pose a credible existential threat to Ethereum, the project is pursuing a coevolutionary strategy by contributing funding and effort to Ethereum projects and freely offering their technology to Ethereum for adoption. DFINITY has labeled itself Ethereum’s “crazy sister” to express its close genetic resemblance to Ethereum, differentiated by its obsession with performance and neuron-inspired governance model.
Dfinity raised $61 million from Andreesen Horowitz and Polychain Capital in a February 2018 funding round. At the time, Dfinity said it wanted to create an “internet computer” to cut the costs of running cloud-based business applications. A further $102 million funding round in August 2018 brought the project’s total funding to $195 million.
In May 2018, Dfinity announced plans to distribute around $35 million worth of Dfinity tokens in an airdrop. It was part of the company’s plan to create a “Cloud 3.0.” Because of regulatory concerns, none of the tokens went to US residents.
DFINITY would broaden and strengthen the EVM ecosystem by giving applications a choice of platforms with different characteristics. However, if DFINITY succeeds in delivering a fully EVM-compatible smart contract platform with higher transaction throughput, faster confirmation times, and governance mechanisms that can resolve public disputes without causing community splits, then it will represent a clearly superior choice for deploying new applications and, as its network effects grow, an attractive place to bring existing ones. Of course the challenge for DFINITY will be to deliver on these promises while meeting the security demands of a public chain with significant value at risk.

1.1.1 DFINITY FUTURE

  • DFINITY aims to explore new blockchain territory related to the original goals of the Ethereum project and is sometimes considered “Ethereum’s crazy sister.”
  • DFINITY is developing blockchain-based infrastructure to support a new style of the internet (akin to Ethereum’s “World Computer”), one in which the internet itself will support software applications and data rather than various cloud hosting providers.
  • The project suggests this reinvented software platform can simplify the development of new software systems, reduce the human capital needed to maintain and secure data, and preserve user data privacy.
  • Dfinity aims to reduce the costs of cloud services by creating a decentralized “internet computer” which may launch in 2020
  • Dfinity claims transactions on its network are finalized in 3–5 seconds, compared to 1 hour for Bitcoin and 10 minutes for Ethereum.

1.1.2 DFINITY’S VISION

DFINITY’s vision is its new internet infrastructure can support a wide variety of end-user and enterprise applications. Social media, messaging, search, storage, and peer-to-peer Internet interactions are all examples of functionalities that DFINITY plans to host atop its public Web 3.0 cloud-like computing resource. In order to provide the transaction and data capacity necessary to support this ambitious vision, DFINITY features a unique consensus model (dubbed Threshold Relay) and algorithmic governance via its Blockchain Nervous System (BNS) — sometimes also referred to as the Network Nervous System or NNS.

1.2 DFINITY COMMUNITY

The DFINITY community brings people and organizations together to learn and collaborate on products that help steward the next-generation of internet software and services. The Internet Computer allows developers to take on the monopolization of the internet, and return the internet back to its free and open roots. We’re committed to connecting those who believe the same through our events, content, and discussions.

https://preview.redd.it/0zv64fzf05e51.png?width=637&format=png&auto=webp&s=e2b17365fae3c679a32431062d8e3c00a57673cf

1.3 DFINITY ROADMAP (TIMELINE)

February 15, 2017
Ethereum based community seed round raises 4M Swiss francs (CHF)
The DFINITY Stiftung, a not-for-profit foundation entity based in Zug, Switzerland, raised the round. The foundation held $10M of assets as of April 2017.
February 8, 2018
Dfinity announces a $61M fundraising round led by Polychain Capital and Andreessen Horowitz
The $61M round, led by Polychain Capital and Andreessen Horowitz, along with a DFINITY Ecosystem Venture Fund (which will be used to support projects developing on the DFINITY platform) and an Ethereum-based raise in 2017, brings the total funding for the project to over $100 million. This is the first cryptocurrency token that Andreessen Horowitz has invested in, led by Chris Dixon.
August 2018
Dfinity raises a $102,000,000 venture round from Multicoin Capital, Village Global, Aspect Ventures, Andreessen Horowitz, Polychain Capital, Scalar Capital, Amino Capital and SV Angel.
January 23, 2020
Dfinity launches an open source platform aimed at the social networking giants

2.DFINITY TECHNOLOGY

Dfinity is building what it calls the internet computer, a decentralized technology spread across a network of independent data centers that allows software to run anywhere on the internet rather than in server farms that are increasingly controlled by large firms, such as Amazon Web Services or Google Cloud. This week Dfinity is releasing its software to third-party developers, who it hopes will start making the internet computer’s killer apps. It is planning a public release later this year.
At its core, the DFINITY consensus mechanism is a variation of the Proof of Stake (PoS) model, but offers an alternative to traditional Proof of Work (PoW) and delegated PoS (dPoS) networks. Threshold Relay intends to strike a balance between inefficiencies of decentralized PoW blockchains (generally characterized by slow block times) and the less robust game theory involved in vote delegation (as seen in dPoS blockchains). In DFINITY, a committee of “miners” is randomly selected to add a new block to the chain. An individual miner’s probability of being elected to the committee proposing and computing the next block (or blocks) is proportional to the number of dfinities the miner has staked on the network. Further, a “weight” is attributed to a DFINITY chain based on the ranks of the miners who propose blocks in the chain, and that weight is used to choose between competing chains (i.e. resolve chain forks).
A decentralized random beacon manages the random selection process of temporary block producers. This beacon is a verifiable random function (VRF), which is a pseudo-random function that provides publicly verifiable proofs of its outputs’ correctness. A core component of the random beacon is the use of Boneh-Lynn-Shacham (BLS) signatures. By leveraging the BLS signature scheme, the DFINITY protocol ensures no actor in the network can determine the outcome of the next random assignment.
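The chaining structure this describes can be sketched as follows; a keyed hash stands in for the group’s BLS threshold signature (like a unique BLS signature, it is a deterministic function of the group key and message, which is what makes the output unbiasable):

```python
import hashlib
import hmac

# Illustration of the beacon's chaining structure. HMAC stands in for the
# group's unique, deterministic BLS threshold signature.
def group_sign(group_key: bytes, message: bytes) -> bytes:
    return hmac.new(group_key, message, hashlib.sha256).digest()

group_keys = [b"group-0-secret", b"group-1-secret", b"group-2-secret"]

randomness = hashlib.sha256(b"genesis").digest()
for round_no in range(5):
    # The previous output selects which registered group signs next ...
    group = int.from_bytes(randomness[:8], "big") % len(group_keys)
    # ... and the hash of that group's signature is the next random output.
    signature = group_sign(group_keys[group], randomness)
    randomness = hashlib.sha256(signature).digest()
    print(round_no, "group", group, randomness.hex()[:16])
```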
Dfinity is introducing a new standard, which it calls the internet computer protocol (ICP). These new rules let developers move software around the internet as well as data. All software needs computers to run on, but with ICP the computers could be anywhere. Instead of running on a dedicated server in Google Cloud, for example, the software would have no fixed physical address, moving between servers owned by independent data centers around the world. “Conceptually, it’s kind of running everywhere,” says Dfinity engineering manager Stanley Jones.
DFINITY also features a native programming language, called ActorScript (name may be subject to change), and a virtual machine for smart contract creation and execution. The new smart contract language is intended to simplify the management of application state for programmers via an orthogonal persistence environment (which means active programs are not required to retrieve or save their state). All ActorScript contracts are eventually compiled down to WebAssembly instructions so the DFINITY virtual machine layer can execute the logic of applications running on the network. The advantage of using the WebAssembly standard is that all major browsers support it and a variety of programming languages can compile down to Wasm (not just ActorScript).
Dfinity is moving fast. Recently, Dfinity showed off a TikTok clone called CanCan. In January it demoed a LinkedIn-alike called LinkedUp. Neither app is being made public, but they make a convincing case that apps made for the internet computer can rival the real things.

2.1 DFINITY CORE APPLICATIONS

The DFINITY cloud has two core applications:
  1. Enabling the re-engineering of business: DFINITY ambitiously aims to facilitate the re-engineering of mass-market services (such as Web Search, Ridesharing Services, Messaging Services, Social Media, Supply Chain, etc) into open source businesses that leverage autonomous software and decentralised governance systems to operate and update themselves more efficiently.
  2. Enable the re-engineering of enterprise IT systems to reduce costs: DFINITY seeks to re-engineer enterprise IT systems to take advantage of the unique properties that blockchain computer networks provide.
At present, computation on blockchain-based computer networks is far more expensive than traditional, centralised solutions (Amazon Web Services, Microsoft Azure, Google Cloud Platform, etc). Despite increasing computational cost, DFINITY intends to lower net costs “by 90% or more” through reducing the human capital cost associated with sustaining and supporting these services.
Whilst conceptually similar to Ethereum, DFINITY employs original and new cryptography methods and protocols (crypto:3) at the network level, in concert with AI and network-fuelled systemic governance (Blockchain Nervous System — BNS) to facilitate Corporate adoption.
DFINITY recognises that different users value different properties and sees itself as more of a fully compatible extension of the Ethereum ecosystem rather than a competitor of the Ethereum network.
In the future, DFINITY hopes that much of its “new crypto might be used within the Ethereum network”, and it is also working hard on shared technology components.
As the DFINITY project develops, the DFINITY Stiftung foundation intends to steadily increase the BNS' decision-making responsibilities, eventually dissolving its own involvement entirely once the BNS is sufficiently sophisticated.
DFINITY's consensus mechanism is a heavily optimized proof-of-stake (PoS) model. It places a strong emphasis on transaction finality by implementing a Threshold Relay technique in conjunction with the BLS signature scheme and a notarization method to address many of the problems associated with PoS consensus.

2.2 THRESHOLD RELAY

As a public cloud computing resource, DFINITY targets business applications by substantially reducing cloud computing costs for IT systems. It aims to achieve this with a highly scalable and powerful network with potentially unlimited capacity. The DFINITY platform is chock-full of innovative designs and features, like its Blockchain Nervous System (BNS) for algorithmic governance.
One of the primary components of the platform is its novel Threshold Relay Consensus model from which randomness is produced, driving the other systems that the network depends on to operate effectively. The consensus system was first designed for a permissioned participation model but can be paired with any method of Sybil resistance for an open participation model.
“The mechanism by which Dfinity randomly samples replicas into groups, sets the groups (committees) up for threshold operation, chooses the current committee, and relays from one committee to the next is called the threshold relay.”
Threshold Relay consists of four layers (as mentioned previously):
  1. Notary layer, which provides fast finality guarantees to clients and external observers and eliminates nothing-at-stake and selfish mining attacks, providing Sybil attack resistance.
  2. Blockchain layer that builds a blockchain from validated transactions via the Probabilistic Slot Protocol driven by the random beacon.
  3. Random beacon, which, as previously covered, provides the source of randomness for all higher layers, such as the blockchain layer and smart contract applications.
  4. Identity layer that provides a registry of all clients.

2.2.1 HOW DOES THRESHOLD RELAY WORK?

Threshold Relay produces an endogenous random beacon, and each new value defines random group(s) of clients that may independently try to form into a “threshold group”. The composition of each group is entirely random, such that groups can intersect and clients can be present in multiple groups. In DFINITY, each group comprises 400 members. When a group is defined, the members attempt to set up a BLS threshold signature system using a distributed key generation protocol. If they are successful within some fixed number of blocks, they then register the public key (“identity”) created for their group on the global blockchain using a special transaction, such that it will become part of the set of active groups in a following “epoch”. The network begins at “genesis” with some number of predefined groups, one of which is nominated to create a signature on some default value. Such signatures are random values (if they were not, the group's signatures on messages would be predictable and the threshold signature system insecure), and each random value produced this way is used to select a random successor group. This next group then signs the previous random value to produce a new random value and select another group, relaying between groups ad infinitum and producing a sequence of random values.
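Before turning to how the threshold signatures themselves are formed, here is a rough model of the relay loop just described; a hash stands in for the BLS group signature (an assumption for brevity, not the real cryptography):

```python
import hashlib

def relay(group_keys: list, genesis: bytes, rounds: int) -> list:
    """Toy threshold-relay chain; a hash stands in for the BLS group signature."""
    outputs, rand = [], genesis
    for _ in range(rounds):
        idx = int.from_bytes(rand, "big") % len(group_keys)    # beacon picks a group
        sig = hashlib.sha256(group_keys[idx] + rand).digest()  # group "signs" prev value
        rand = hashlib.sha256(sig).digest()                    # new random value
        outputs.append(rand)
    return outputs

chain = relay([b"group-0", b"group-1", b"group-2"], b"\x00" * 32, 5)
```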
In a cryptographic threshold signature system, a group can produce a signature on a message upon the cooperation of some minimum threshold of its members, which is set to 51% in the DFINITY network. To produce the threshold signature, group members sign the message individually (here, the preceding group's threshold signature), creating individual “signature shares” that are then broadcast to other group members. The group threshold signature can be constructed upon combination of a sufficient threshold of signature shares. For example, if the group size is 400 and the threshold is set at 201, any client that collects that many shares will be able to construct the group's signature on the message. Other group members can validate each signature share, and any client using the group's public key can validate the single group threshold signature produced by combining them. The magic of the BLS scheme is that it is “unique and deterministic”, meaning that from whatever subset of group members the required number of signature shares is collected, the single threshold signature created is always the same and only a single correct value is possible.
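The “only a single correct value is possible” behavior comes from Lagrange interpolation: any threshold-sized subset of shares reconstructs the same unique value. The sketch below illustrates the interpolation over a plain prime field; in BLS the same coefficients are applied to signature shares, which are curve points, and the modulus is the curve's group order (the prime below is a stand-in).

```python
P = 2**255 - 19   # demo prime; an assumption, not DFINITY's actual modulus

def combine(shares: dict, t: int) -> int:
    """Recover the group value from any t of the shares {index: value}."""
    subset = list(shares.items())[:t]
    result = 0
    for xi, yi in subset:
        num = den = 1
        for xj, _ in subset:
            if xj != xi:
                num = num * (-xj) % P       # Lagrange numerator at x = 0
                den = den * (xi - xj) % P   # Lagrange denominator
        result = (result + yi * num * pow(den, -1, P)) % P
    return result
```

Feeding it shares generated from any degree t-1 polynomial, every size-t subset returns the identical constant term, mirroring the uniqueness of the combined BLS signature.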
Consequently, the sequence of random values produced is entirely deterministic and unmanipulable, and the signatures generated by relaying between groups produce a Verifiable Random Function, or VRF. Although the sequence of random values is pre-determined given some set of participating groups, each new random value can only be produced upon the minimal agreement of a threshold of the current group. Conversely, for relaying to stall because a random number was not produced, the number of correct processes must be below the threshold. Thresholds are configured so that this is extremely unlikely. For example, if the group size is set to 400 and the threshold is 201, then 200 or more of the group's processes must become faulty to prevent production. If there are 10,000 processes in the network, of which 3,000 are faulty, the probability that this will occur is less than 10^-17.
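That bound can be sanity-checked directly: the number of faulty members in a randomly sampled group of 400 follows a hypergeometric distribution, and the stall probability is the tail P(X >= 200). A short script (using log-binomials to keep the huge factorials in floating-point range):

```python
from math import lgamma, exp

def log_comb(n: int, k: int) -> float:
    """log(n choose k) via lgamma, to avoid overflowing floats."""
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

N, FAULTY, GROUP = 10_000, 3_000, 400
log_denom = log_comb(N, GROUP)
# P(200 or more of the 400 sampled members are faulty)
tail = sum(exp(log_comb(FAULTY, k) + log_comb(N - FAULTY, GROUP - k) - log_denom)
           for k in range(200, GROUP + 1))
print(f"P(relay stalls) ~ {tail:.2e}")  # should come out well below 1e-17
```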

2.3 DFINITY TOKEN

The DFINITY blockchain also supports a native token, called dfinities (DFN), which perform multiple roles within the network, including:
  1. Fuel for deploying and running smart contracts.
  2. Security deposits (i.e. staking) that enable participation in the BNS governance system.
  3. Security deposits that allow client software or private DFINITY cloud networks to connect to the public network.
Although dfinities will end up being assigned a value by the market, the DFINITY team does not intend for DFN to act as a currency. Instead, the project has envisioned PHI, a “next-generation” crypto-fiat scheme, to act as a stable medium of exchange within the DFINITY ecosystem.
Neuron operators can earn dfinities by participating in network-wide votes, which may concern protocol upgrades, new economic policies, etc. DFN rewards for participating in the governance system are proportional to the number of tokens staked inside a neuron.

2.4 SCALABILITY

DFINITY is constantly developing a structure that separates consensus, validation, and storage into separate layers. The storage layer is divided into multiple shards, each of which is responsible for processing the transactions that occur in its shard state. The validation layer is responsible for combining the hashes of all shards in a Merkle-like structure, resulting in a global state root that is stored in blocks on the top-level chain.
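A minimal sketch of that hash-combining step, assuming SHA-256 and simple pairwise folding (DFINITY's actual tree layout may differ):

```python
import hashlib

def merkle_root(shard_hashes: list) -> bytes:
    """Fold per-shard state hashes into a single top-level root.
    Assumes a non-empty list of 32-byte hashes."""
    level = shard_hashes
    while len(level) > 1:
        if len(level) % 2:                # duplicate the last node on odd levels
            level = level + [level[-1]]
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```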

2.5 DFINITY CONSENSUS ALGORITHM

The single most important aspect of the user experience is certainly the time required before a transaction becomes final. This is not solved by a short block time alone; Dfinity's team also had to reduce the number of confirmations required to a small constant. Moreover, DFINITY had to provide a provably secure proof-of-stake algorithm that scales to millions of active participants without compromising on decentralization.
Dfinity soon realized that the key to scalability lay in having an unmanipulable source of randomness available. Hence they built a scalable decentralized random beacon, based on what they call the Threshold Relay technique, right into the foundation of the protocol. This strong foundation drives a scalable and fast consensus layer: On top of the beacon runs a blockchain which utilizes notarization by threshold groups to achieve near-instant finality. Details can be found in the overview paper that we are releasing today.
The roots of the DFINITY consensus mechanism date back to 2014, when their Chief Scientist, Dominic Williams, started to look for more efficient ways to drive large consensus networks. Since then, much research has gone into the protocol, and it took several iterations to reach its current design.
For any practical consensus system the difficulty lies in navigating the tight terrain that one is given between the boundaries imposed by theoretical impossibility-results and practical performance limitations.
The first key milestone was the novel Threshold Relay technique for decentralized, deterministic randomness, which is made possible by certain unique characteristics of the BLS signature system. The next breakthrough was the notarization technique, which allows DFINITY consensus to solve the traditional problems that come with proof-of-stake systems. Getting the security proofs sound was the final step before publication.
DFINITY consensus has made the proper trade-offs between the practical side (realistic threat models and security assumptions) and the theoretical side (provable security). Out came a flexible, tunable algorithm, which we expect will establish itself as the best performing proof-of-stake algorithm. In particular, having the built-in random beacon will prove to be indispensable when building out sharding and scalable validation techniques.

2.6 LINKEDUP

The startup has rather cheekily called this “an open version of LinkedIn,” the Microsoft-owned social network for professionals. Unlike LinkedIn, LinkedUp, which runs on any browser, is not owned or controlled by a corporate entity.
LinkedUp is built on Dfinity’s so-called Internet Computer, its name for the platform it is building to distribute the next generation of software and open internet services.
The software is hosted directly on the internet in a Switzerland-based independent data center, but in the concept of the Internet Computer, it could be hosted at your house or mine. The compute power to run the application (LinkedUp, in this case) is coming not from Amazon AWS, Google Cloud, or Microsoft Azure, but is instead based on the distributed architecture that Dfinity is building.
Specifically, Dfinity notes that when enterprises and developers run their web apps and enterprise systems on the Internet Computer, the content is decentralized across a minimum of four or a maximum of an unlimited number of nodes in Dfinity’s global network of independent data centers.
Dfinity is open-sourcing LinkedUp to developers for creating other types of open internet services on the architecture it has built.
“Open Social Network for Professional Profiles” suggests that on Dfinity's model one can create an “Open WhatsApp”, “Open eBay”, “Open Salesforce” or “Open Facebook”.
The tools include a Canister Software Developer Kit and a simple programming language called Motoko that is optimized for Dfinity’s Internet Computer.
“The Internet Computer is conceived as an alternative to the $3.8 trillion legacy IT stack, and empowers the next generation of developers to build a new breed of tamper-proof enterprise software systems and open internet services. We are democratizing software development,” Williams said. “The Bronze release of the Internet Computer provides developers and enterprises a glimpse into the infinite possibilities of building on the Internet Computer — which also reflects the strength of the Dfinity team we have built so far.”
Dfinity says its “Internet Computer Protocol” allows for a new type of software called autonomous software, which can guarantee permanent APIs that cannot be revoked. When all these open internet services (e.g. open versions of WhatsApp, Facebook, eBay, Salesforce, etc.) are combined with other open software and services it creates “mutual network effects” where everyone benefits.
Since 1 November, DFINITY has released 13 new public versions of the SDK, leading up to its second major milestone [at WEF Davos] of demoing a decentralized web app called LinkedUp on the Internet Computer. Subsequent milestones towards the public launch of the Internet Computer will involve:
  1. Onboarding a global network of independent data centers.
  2. A fully tested economic system.
  3. Fully tested Network Nervous Systems for configuration and upgrades.

2.7 WHAT IS MOTOKO?

Motoko is a new software language being developed by the DFINITY Foundation, with an accompanying SDK, designed to help the broadest possible audience of developers create reliable and maintainable websites, enterprise systems, and internet services on the Internet Computer with ease. By developing the Motoko language, the DFINITY Foundation will ensure that a language highly optimized for the new environment is available. However, the Internet Computer can support any number of different software frameworks, and the DFINITY Foundation is also working on SDKs that support the Rust and C languages. Eventually, it is expected there will be many different SDKs that target the Internet Computer.
Full article
submitted by CoinEx_Institution to u/CoinEx_Institution [link] [comments]

STATERA

STATERA: a smart contract deflationary token within a portfolio of selected coins/tokens built on the Ethereum blockchain.
INTRODUCTION
BLOCKCHAIN TECHNOLOGY IS CHANGING, DEFI IS EVOLVING! Blockchain technology is changing the world forever. Over the past decade, the introduction of blockchain technology to the world, and especially to the world's financial system, has shown how convenient and secure finance can be. Blockchain has proven its worth as an essential tool in the world's financial system, from new modes of transaction to flexible usage.
The Ethereum blockchain has played, and is still playing, a significant role in blockchain finance evolution. With over a hundred projects out there, each aiming and claiming to solve the world's financial system problems, Decentralized Finance (DeFi) has proven to play an essential role in this regard.
Deflationary projects arose amidst the flow of developers devising ways to make a project unique and scarce through constant reduction of total supply on every transaction. Over the past few years, many projects attempted this approach; however, they failed to achieve this goal as a result of the unsustainability of the project, or the outright intent of some developers to scam people out of their money. Nevertheless, previous deflationary cryptocurrency attempts have proven that this method alone will not make a project unique or scarce. Hence, a few projects attempted to apply other interesting features to their deflationary projects, some of which are discussed briefly below.
DEFLATIONARY PROJECTS
Through a constant reduction of total supply, achieved by sending a certain percentage of each token transaction to the 0x address, a deflationary project aims to make its token scarce, thus increasing the demand for and value of the project.

BOMB
The first project to start this on-chain action was the BOMB token. With every BOMB transaction, 1% of the transacted amount is sent to the 0x address and lost forever; this is known as BURN. Unfortunately, this didn't work as anticipated for long: it turns out it takes more than just that to keep a project in demand. In light of this, a few other deflationary projects decided to add extra features to the deflationary attribute.
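The mechanic reduces to a few lines; the sketch below is illustrative and not BOMB's actual contract code:

```python
def transfer(balances: dict, supply: int, sender: str, recipient: str, amount: int) -> int:
    """Move tokens, burning 1% of the transacted amount; returns the new supply."""
    burn = amount // 100                               # 1% of this transaction
    balances[sender] -= amount
    balances[recipient] = balances.get(recipient, 0) + amount - burn
    return supply - burn                               # burned tokens leave the supply
```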

SHUF
A good example of this is another deflationary project called Shuffle Token (SHUF), which burns 1% on every transaction and randomly sends another 1% to one of the top 512 holding addresses; this second feature is known as Heap.

BSOV
The third example of a deflationary project with an extra feature is BitcoinSov (BSoV). This deflationary project also has a 1% burn, but its extra feature is mining. It is the only mineable deflationary project in existence as of today.

RTK
The last example is Ruletka (RTK). Ruletka is an experimental ERC20 token developed in the small town of Alatyr in Russia. When a transaction is made using RTK, a number is chosen between 1 and 6. If 6 is chosen, the coins in the transaction are sent to the 0x address and burned. It was inspired by the legendary Russian roulette gambling game.
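A toy model of the mechanic (the real contract derives its randomness on-chain, which this sketch does not attempt):

```python
import secrets

def rtk_transfer(amount: int) -> tuple:
    """Returns (amount_received, amount_burned) for a Ruletka-style transfer."""
    roll = secrets.randbelow(6) + 1    # a number between 1 and 6
    if roll == 6:
        return 0, amount               # unlucky roll: the whole amount burns
    return amount, 0
```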
Regardless, deflationary projects are yet to receive the world's attention, as most of them struggle to remain sustainable.
INDEX FUND
For the past two years, Decentralized Finance, or DeFi, has been undergoing a series of changes and developments. DeFi is aimed at providing solutions to the challenges faced by traditional financial systems, using the Ethereum blockchain as its primary station.
What is an Index Fund?
Investopedia.com describes an index fund as a type of mutual fund with a portfolio constructed to match or track the components of a financial market index, providing broad market exposure with low operating expenses and low portfolio turnover.
Market Index
A hypothetical portfolio of investment holdings representing a segment of the financial market, where the calculation of the index value comes from the prices of the underlying holdings (Investopedia). 2017 was a significant year for crypto, but it was succeeded by an extended bear market. Since its rise, the crypto market has been experiencing a series of ups and downs, making it difficult, especially for retail investors, to select which market to invest in. The idea of an index market in the blockchain industry is something not usually mentioned, as most investors prefer to long or short BTC or other alts.
CRYPTOCURRENCY INDEX FUND
In late 2019 and early 2020, cryptocurrency index funds became an item of discussion and interest in the cryptocurrency investment world. A cryptocurrency index fund spares an investor the stress of constant or active management of their crypto fund portfolio.
Index funds help spread risk by diversifying an investment across a broad selection of coins, protected from crypto market volatility. This means your fund is being handled for you, which of course necessitates a fee. While fees vary from one index fund manager to another, the differences in fees don't guarantee the performance of one manager over another. Popular cryptocurrency index funds currently available on the market as of 2020 include:

  1. Coinbase Index Fund
  2. BB Index
  3. Iconomi
  4. CBI Index 7 (CBIX7).
Note: this article does not prescribe an index fund or tokens for readers to invest in; rather, do your own research before investing.
INDEXED-DEFLATIONARY TOKEN
The idea behind an indexed deflationary token is to keep cutting the circulating supply on a per-transaction basis, coupled with an investment in a diverse selection of coins on the Ethereum blockchain market.
Using Balancer, the idea is to have a deflationary token inside a pool with other assets, for example USDC, ENJ, LINK, and KNC, along with the deflationary token itself, all of which are in one pool. When the token is bought, the pool automatically re-balances into the other assets, increasing the trade volume and at the same time burning. The result is a mix of an index/portfolio token with a deflationary token.

STA
STATERA [STA]
Statera, in Latin, means balance. STA is a smart contract deflationary token within a portfolio of selected coins/tokens built on the Ethereum blockchain. The index portfolio includes four volatile markets and three stable markets: ETH, MKR, SNX, LINK, DAI, SUSD, and DZAR. This leaves STA with 70% volatility and 30% stability. STA is built on a smart contract holding all the funds, including STA itself.
When STA is purchased with ETH, the ETH is spread into the weights in the portfolio, and the STA purchased is removed from the index amount. The index suite is not fixed, i.e. the coins in the portfolio can easily be replaced should the market demand a change.
The index also charges a fee. Every time a trade is executed through it (not a purchase of STA this time, but swapping DAI for ETH, for example), 1% stays behind and is shared among the portfolio. With the deflationary attribute of STA, the more trades that occur, the more valuable it becomes. The idea is that a price difference will lead to auto-balancing, which means burning. There you have it: an Index-Deflationary Token.
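The fee-and-rebalance mechanic can be sketched with Balancer's published out-given-in swap formula, where the fee is taken from the input amount and stays in the pool; the balances, weights, and fee below are illustrative, not STA's live parameters:

```python
def out_given_in(balance_in: float, weight_in: float,
                 balance_out: float, weight_out: float,
                 amount_in: float, fee: float = 0.01) -> float:
    """Balancer swap quote; the fee portion of the input remains in the pool."""
    effective_in = amount_in * (1 - fee)              # 1% stays behind as the fee
    ratio = balance_in / (balance_in + effective_in)
    return balance_out * (1 - ratio ** (weight_in / weight_out))

# e.g. swapping 100 DAI into an equal-weight DAI/ETH pool (illustrative numbers)
eth_out = out_given_in(50_000, 0.25, 150, 0.25, 100)
```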
STATERA [STA] TOKEN INFORMATION
Total Supply: 100,000,000.
Burn rate: 1% on each transaction.
Blockchain: Ethereum.
Website: Coming Soon.
Community: Telegram Twitter
Contract address: Etherscan
Portfolio address: Zerion
Current exchange: 1inch Exchange
submitted by ghosthunter_01 to ethtrader [link] [comments]
