web3: Creating an NFT contract

Wow…it’s been a while!

A couple of weeks ago one of our clients approached us about helping them build an NFT (more on that later). In case you’re not “extremely online” and don’t know what web3 or NFTs are, here’s a quick primer.

Crypto and NFTs

As cryptocurrencies go, Bitcoin and Ethereum are the “OG” coins. They’re related projects but ultimately quite different. Ethereum differentiates itself by enabling the Ethereum Virtual Machine (EVM), a global, distributed computing environment which uses Ether, Ethereum’s currency, as payment for executing computation. Executing pieces of code, known as smart contracts, on the EVM is broadly referred to as “web3”. The web3 vision is that it should be possible to transition dozens of financial business processes onto the blockchain by using the EVM and smart contracts to encode the rules of those processes. Think stuff like insurance, stock issuance, and even sports books.

Non-fungible tokens (NFTs) are a specific type of smart contract which encodes ownership of an asset onto the Ethereum blockchain. What makes NFTs special is that, because of the decentralized nature of the blockchain and the EVM, it’s possible to freely trade NFTs and encode rules into their smart contracts. OpenSea is the de facto NFT marketplace, where users can trade tokens without the original creators having to build any additional infrastructure. It’s like StubHub…but anyone can sell any NFT on it and anyone can access it.

In addition, because the EVM is Turing complete, it’s possible to enable extremely complex behaviors within the contract of an NFT. In theory, an NFT could represent ownership of anything from event tickets to digital collectibles. But as it turns out, digital collectibles are where most of the action is today. See, for example, Bored Ape Yacht Club, which has seen sets of tokens trade for upwards of $24m: “Set of ‘Bored Ape’ NFTs sells for $24.4 mln in Sotheby’s online auction”.

OK, now that we’re all caught up, how does one create an NFT? There are more or less four steps:

  1. Develop a smart contract in Solidity which implements the EIP-721: Non-Fungible Token Standard
  2. Write some HTML/JS to interact with web3 via MetaMask to call your contract
  3. Publish the contract to the Ethereum blockchain
  4. Mint your tokens via the HTML/JS from step 2

Sounds simple enough, but how do you actually make it happen?

Here’s a walkthrough to launch an NFT in your local test environment.

You can develop the Solidity code in any text editor, but there are some IDE options, including an IntelliJ plugin and a larger list here: https://ethereum.org/en/developers/docs/ides/. It’s certainly possible to write an EIP-721 Solidity contract from scratch, but you’ll end up writing a lot of boilerplate code, which will increase the surface area for bugs. A sensible alternative is to use the OpenZeppelin framework, which provides you with a suite of battle-tested, open source libraries to bootstrap your smart contract. Additionally, OpenZeppelin has a handful of working tutorials so that you can see a smart contract working end to end. Check out OpenSea Creatures.

After you have your contract, the next piece is interacting with the blockchain to publish it. There are a few tools here that all interact:

  1. MetaMask – MetaMask is a browser-based crypto wallet and web3 provider. It allows you to store Ether and interact with contracts on the Ethereum blockchain. You’ll use MetaMask to ultimately mint a token.
  2. Ganache – Ganache is a tool which allows you to run an Ethereum blockchain on your local machine.
  3. Truffle – Truffle is a suite of tools which makes it easier to interact with the blockchain. You’ll use Truffle to publish your contract and invoke methods within your contract.

Once you have all the tooling set up, the steps you’ll need to take are:

  1. Set up MetaMask and note the mnemonic phrase your keys were initialized with
  2. Launch Ganache with that mnemonic so that your accounts have some test Ether
  3. Use Truffle to publish your contract to your local Ganache blockchain
  4. Use the HTML/JS integration you wrote to invoke MetaMask and call the .mint() function in your contract (a rough sketch of an equivalent call follows below)
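
Since the full HTML/JS + MetaMask integration is too long to show here, below is a minimal sketch of an equivalent mint call made directly from Java with the web3j library against the local Ganache node. The mint(address) signature, the contract address placeholder, and the gas values are assumptions about your particular contract, so treat it as a sketch rather than a drop-in implementation.

```java
import java.math.BigInteger;
import java.util.Arrays;
import java.util.Collections;

import org.web3j.abi.FunctionEncoder;
import org.web3j.abi.datatypes.Address;
import org.web3j.abi.datatypes.Function;
import org.web3j.abi.datatypes.Type;
import org.web3j.protocol.Web3j;
import org.web3j.protocol.core.DefaultBlockParameterName;
import org.web3j.protocol.core.methods.request.Transaction;
import org.web3j.protocol.core.methods.response.EthSendTransaction;
import org.web3j.protocol.http.HttpService;

public class MintSketch {
    public static void main(String[] args) throws Exception {
        // Connect to the local Ganache node (its default RPC port is 8545)
        Web3j web3 = Web3j.build(new HttpService("http://127.0.0.1:8545"));

        // Ganache pre-funds its accounts, so use the first one as the sender
        String from = web3.ethAccounts().send().getAccounts().get(0);

        // Placeholder: the address Truffle printed when it migrated your contract
        String contractAddress = "0xYourContractAddress";

        // ABI-encode a call to mint(address); adjust to your contract's real signature
        Function mint = new Function(
                "mint",
                Arrays.<Type>asList(new Address(from)),
                Collections.emptyList());
        String data = FunctionEncoder.encode(mint);

        BigInteger nonce = web3
                .ethGetTransactionCount(from, DefaultBlockParameterName.LATEST)
                .send()
                .getTransactionCount();

        // Ganache's accounts are unlocked, so the node signs this transaction for us
        Transaction tx = Transaction.createFunctionCallTransaction(
                from,
                nonce,
                BigInteger.valueOf(20_000_000_000L), // 20 gwei gas price (Ganache's default)
                BigInteger.valueOf(500_000),         // generous gas limit for a mint
                contractAddress,
                data);

        EthSendTransaction response = web3.ethSendTransaction(tx).send();
        System.out.println("mint() transaction hash: " + response.getTransactionHash());
    }
}
```

In the browser, MetaMask’s injected provider plays the role web3j plays here, signing the transaction with the user’s key instead of relying on Ganache’s unlocked accounts.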

Congratulations, you just minted your first NFT in test!

The process for deploying an NFT live is effectively the same, except that you’d need to buy some real Ether and you’d point Truffle at the live Ethereum network when you publish your contract.

Hope this was helpful, and we’ll add more web3-related content as we continue to build solutions on it!

Spring Boot: Creating a filter to verify an API key header

Phew! Been a while but we’re back!

NOTE: There’s a working Spring Boot application demonstrating this at https://github.com/Setfive/spring-demos

For many applications, a security and authentication scheme centered around users makes sense, since the focus of the application is logged-in users taking some sort of action. Imagine a task tracking app: users “create tasks”, “complete tasks”, etc. For these use cases, Spring Boot’s security system makes it easy to add application security, which then provides a “User” model to the rest of the application. This allows your code to do things like “getUser()” in a Controller and have ready access to the currently authenticated user.
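
As a quick illustration, with Spring Security configured the authenticated principal can be injected straight into a controller method. This controller and route are hypothetical:

```java
import java.security.Principal;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class TaskController {

    // Spring MVC resolves the Principal argument to the currently
    // authenticated user when Spring Security is configured
    @GetMapping("/tasks")
    public String listTasks(Principal principal) {
        return "Tasks for " + principal.getName();
    }
}
```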

But what about applications that don’t have a user-based model? Imagine something like an API which provides HTML-to-PDF conversions. There’s really no concept of “Users”, but rather a need to verify that requests are coming from authorized partners via something like an API key. So from an application perspective you don’t really want to involve the user management system, there are no passwords to verify, and obviously the simpler the better.

It turns out it’s very straightforward to accomplish this with a Spring-managed filter. Full code below:
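
(The working version lives in the spring-demos repo linked above; what follows is a minimal sketch of the idea. The “X-API-KEY” header name and the ApiKeyRepository with its existsByKey query are illustrative stand-ins, not the exact names from the demo.)

```java
import java.io.IOException;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

// Stand-in for a Spring Data repository with a derived query, e.g.
// boolean existsByKey(String key)
interface ApiKeyRepository {
    boolean existsByKey(String key);
}

@Component
public class ApiKeyFilter extends OncePerRequestFilter {

    private final ApiKeyRepository apiKeyRepository;

    // Constructor injection works because the filter is a Spring @Component
    public ApiKeyFilter(ApiKeyRepository apiKeyRepository) {
        this.apiKeyRepository = apiKeyRepository;
    }

    @Override
    protected boolean shouldNotFilter(HttpServletRequest request) {
        // Only guard routes under /api; everything else skips this filter
        return !request.getRequestURI().startsWith("/api");
    }

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain filterChain) throws ServletException, IOException {
        final String key = request.getHeader("X-API-KEY");

        if (key == null || !apiKeyRepository.existsByKey(key)) {
            // Missing or unknown key: reject with a 401 before any controller runs
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Invalid API key");
            return;
        }

        filterChain.doFilter(request, response);
    }
}
```

Extending OncePerRequestFilter guarantees the check runs exactly once per request, even if the request is dispatched internally.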

The code is pretty straightforward but a couple of highlights are:

  • It’s a Spring Component, so you can inject the repository you need to check the database and verify the key is valid
  • It’s set up to only activate on URLs which start with “/api”, so your other routes won’t need to include the key header
  • If the key is missing or invalid it correctly returns a 401 HTTP response code

That’s about it! As always questions and comments welcome!

AWS Modern Application Development E-Book

Amazon Web Services recently published an E-Book on modern application development. In short, this guide explains the significance of digital transformation and how it can reinvent how your business delivers value. The main topics covered include: Digital Innovators, Characteristics of Modern Applications, Data Management & Computing in Modern Applications, and Security & Compliance. Below, I have summarized a few takeaways from each topic.

Digital Innovators

To be a digital innovator, you must work backwards and understand that innovation starts with your customers and listening to their wants and needs. AWS calls this process the “innovation flywheel.” The innovation flywheel consists of three steps: listen, experiment, iterate. After putting your customers first, it is essential to put technology at the center of your business. Some ways to do this are through digital marketplaces (two-sided markets that connect buyers and sellers), direct-to-customer engagement, digital products as services, and insight services.

Characteristics of Modern Applications

Modern application development is a powerful approach to designing, building, and managing software in the cloud. The characteristics of modern applications align with digital innovation (see above). Modern applications require a culture of ownership, which also starts with the customers. To create this culture, companies should hire builders, support them with a belief system, and let them build. It is important to trust in others’ skill sets and know where your boundaries lie. In terms of architectural patterns, most modern applications are built from microservices. Microservices are minimal, single-function services that are deployed separately but interact with one another; each is organized around a business capability, has its own datastore, externalizes its state, and leaves the choice of technology open per service.

Data Management & Computing in Modern Applications

Data management refers to purpose-built databases that serve as decoupled data stores. Data management also shapes computing in modern applications: computing with microservices affects the way you package and run code, using compute services such as AWS Lambda. Release pipelines in AWS are standardized and automated. This means that they are no longer manual; there is continuous integration and continuous delivery. Also, there is a serverless operational model. These models are ideal for high-growth companies that want to innovate quickly because they don’t require server management, they provide flexible scaling, you pay for the value you need, and they automate high availability.

Security & Compliance

Security configuration and automation are needed. To ensure security and compliance, these practices are incorporated within the tooling. Some of this tooling includes code repositories, build-management programs, and deployment tools. Security and compliance are also applied to the release pipeline itself and to the software being released through the pipeline. Lastly, DevOps and DevSecOps safeguard security and compliance. AWS defines DevOps as “the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity.” Similarly, DevSecOps is described as the “philosophy of integrating security practices within the DevOps process. DevSecOps involves creating a ‘Security as Code’ culture with ongoing, flexible collaboration between release engineers and security teams.”

I hope that you found this summary useful. We will continue to summarize AWS content so that you don’t have to read it or navigate demo videos and webinars. Like what you read? Check out our blog post on why AWS is so cool: https://shout.setfive.com/2019/07/11/what-makes-the-aws-cloud-so-cool/.

QA: An afternoon with the Rainforest QA test builder

Even in 2019, software testing is still a challenge for a lot of small companies. Testing is usually not prioritized amongst small teams: they frequently lack a dedicated QA resource, and writing good tests is a unique skill in itself. Because of this, teams end up with maturing software products that have few or no tests, and as development continues, the potential for bugs to enter the product increases. So, how can small teams tackle this challenge? As the software development industry has evolved, it has developed a wide array of quality assurance (QA) tools and techniques. Broadly, these tools can be categorized into two buckets – manual (human) testing and automated testing.

Manual / human testing is essentially what it sounds like: a human QA engineer manually executes a list of steps, evaluates the results, and decides if the tested software is passing. Manual testing is relatively easy to start with because non-technical resources can develop and execute the tests. However, as the test suite grows, teams run into issues because they’re limited by how many QA resources they have. This leads to teams only running tests before certain deployments, causing them to miss bugs.

In contrast to manual testing, automated testing is typically entirely code based: a QA engineer writes tests in a general-purpose programming language which assert that the tested software is still working as anticipated. Since the tests are executed by a computer, this approach does not suffer from the limitation highlighted above. The trade-off is that, because the tests are written in a programming language, technical resources are required to develop them.

So, what if there was a hybrid approach that combined some aspects of each approach? Well, that’s why you’re here! Say hello to RainforestQA.

What is RainforestQA?

RainforestQA is a SaaS product that incorporates both manual and automated software testing approaches. RainforestQA offers a free trial, followed by a pay-as-you-go billing model where you only pay for the resources you use. RainforestQA tests are executed by an automated system or by human testers, depending on how the test is constructed.

What does an automated RainforestQA test look like?

The tests are composed of a series of steps, which describe actions the automated system must take and assertions it must check. The actions include things like “load this page” or “scroll the page down,” while the assertions are things like “see a button” or “confirm text on the page.” When any one of these steps or assertions fails, the entire test fails, which indicates that something is broken in the software being tested.

What’s the process of building these automated tests?

Building automated tests on Rainforest QA differs depending on whether the test is written in Plain English or in the Beta Language. When a test is constructed in Plain English, the user is instructed to write their own question and answer for each step. If the test is written in the Beta Language, an action can be selected from the sidebar on the right, followed by a target, which is also listed on the same sidebar. The types of actions and targets can be adjusted depending on the test and what is being assessed.

When composing certain tests in Beta, you will find that the same sequence of steps is needed repeatedly. Instead of writing out every individual step over and over, the “custom actions” feature can be used. This feature enables a series of steps to be grouped together, which saves a lot of time and energy. I found the custom actions feature exceptionally useful when a login was required at the beginning of a test. However, a flaw in this feature can appear if the custom action itself is being tested: the test results for a custom action will not appear unless the results page is reloaded. While this is a very small detail, it was a fairly substantial inconvenience for me. The test results appeared as though the custom action test was in progress for over an hour when, in actuality, the results were returned within a few minutes; they just did not appear until the page was refreshed.

How does the back-and-forth work between users and Rainforest QA engineers?

When running a test, everything is sent through a real and active test team of almost 60,000 testers. The test team provides clear feedback in a timely manner. If the test passed, it will appear in green (as pictured above); if it failed, it will come back in red. If the “Go To Test” button is selected, the test feedback can be viewed. Specific comments and critiques are given on the particular step that caused the test to fail. Additionally, all of the tests and results are automatically recorded and stored in a neat and orderly fashion.


What is the Difference Between Testing Languages?

As discussed, tests on Rainforest QA can be written in “Plain English” or the “Beta Language.” Writing tests in Plain English is faster and easier, but also much more expensive. For a Plain English test to pass, it has to be written and constructed in a very specific way. For example, if you want to test the login page while leaving the username or password blank, you cannot use the “type” action to indicate that you are leaving it empty. With Beta Language tests, you have to select a specific action from the bar on the right, followed by a target; the only choices are the ones already listed. In Beta you have the option to use custom actions, and you can also make new targets, but only by labeling a pre-existing type of target. When conducting a test in the Beta Language, screenshots are used to identify what should be seen or clicked on each page. The downside is that if there are three of the same buttons on one page, you cannot type in directions, nor can you describe which of the three identical buttons needs to be selected.


Conclusion

I haven’t plugged Rainforest into our development workflow, so I cannot speak to the integrations or reporting. However, I would recommend Rainforest QA to anyone, regardless of their technical ability, who wants to run automated tests quickly and on an inexpensive budget. Building tests on the platform is very quick and straightforward. While you may find a few complications and quirks in each language, it typically would not take more than one revision to fix the issue.

TL;DR

Likes:

  • Interface is easy to use
  • Variety of features available to test
  • Access to a test team that provides feedback quickly
  • Non-technical users can build tests and test the UI without writing code
  • Free trial and then pay-as-you-go pricing
  • Custom actions

Dislikes:

  • Tests written in the Plain English language ask for specific answers, which leaves a huge margin for error, including spelling, spacing, and plurals
  • Plain English tests have to be written and constructed in a very specific way to pass, and you can’t use any screenshots to clarify directions
  • The screenshot feature for capturing targets does not always capture / appear
  • Cannot specify instructions in the Beta Language

Open Data: Which Greater Boston Regions Have Open Data Sets?

Open Data is defined as “data that can be freely-used, shared, and built-on by anyone, anywhere, for any purpose” (Open Knowledge Foundation Blog). Open data provides many benefits.

Just as it is essential to record a nation’s history, recording open data has comparable advantages. Keeping a running log of statistics and information makes it possible to analyze changes in patterns and sequences. With a measurable starting point, as well as updates, each community can stay informed and up to date about its surroundings. It is useful for the affected society to be aware not only of the changes in their government’s policies and implementations, but also of the consequences. With mandatory government submissions and access to open data, local businesses have the ability to develop custom business plans tailored to their company’s surroundings.

Open data often includes demographic statistics in addition to employment information, salary, income, and spending. With open access, local engagement is welcomed and encouraged. Also, there is room for the public sector to make digital and technical transformations, implementing social progression and efficiency. Through this evolution, statistics on unemployment, high school dropout rates, and crime and violence can be targeted and countered.

To ensure political justice, reporting open data is mandatory. This is essential for two reasons: it prevents the government from concealing certain statistics and information, and the data is not gathered for a specific purpose. What this means is that the options for interpretation, analysis, and creativity are unlimited. People can use this data to make assessments and draw conclusions that the government may not have wanted to publicize. Additionally, this data can be used to measure and reinforce financial and economic status. From a technical standpoint, open data is very useful and endless in its opportunities for building.

Some examples of projects that have been produced with open data include a school selection device, a flood print, online voting at events, a home health and safety report, a traffic and accident browser, damage-from-disasters assessments, and a mobile voting ballot. The chart below lists the Greater Boston regions that have open data readily available. With this data, endless projects and tools could be designed. So, what will you build?