Minimum Viable Product (post #10)

Before getting into how we began our minimum viable product, let’s go over some fundamentals.

Minimum Viable Product (MVP): a version of a new product that allows a team to collect the maximum amount of validated learning about customers with the least effort.

In an MVP experiment, you build the smallest version of a product, with the fewest resources, in order to get real feedback and find out whether your idea will work.

Validated learning: learning from a scientific experiment where you are not biasing your customers.

Fail fast: run as many experiments as you can in order to collect more data and find the best version of your product.

Seven Steps of an MVP Experiment

1. Figuring out what your problem/solution set is

2. Identifying your assumptions and ranking them

3. Building testable hypotheses around your assumptions

4. Establishing your minimum criteria for success (MCS)

5. Picking what type of MVP you are going to run and your strategy

6. Executing the MVP experiment

7. Evaluating and learning from the experiment

Figuring out what your problem/solution set is:

The problem was that the user was studying for a standardized exam, often a professional licensing exam. This person was usually a young millennial (in their 20s) who worked full time, so studying had to happen in their free time, which essentially meant that they no longer had any free time. It was not always easy to block out a couple of hours after work to study, and classes and tutors were expensive and hard to afford.

Our first exam was going to be the ASVAB, whose students had a few unique characteristics. They were mostly male (85%), they struggled with math, and they used their phones at least as much as the average young millennial, maybe more. The average millennial spends 5.7 hours per day on their phone (https://www.zdnet.com/article/americans-spend-far-more-time-on-their-smartphones-than-they-think/).

Our solution set to this point was a mobile app focused on micro-learning and mass customization, with some engaging gamification features.

We had also come across another educational concept that we wanted to consider: mastery-based learning (MBL). MBL basically says that once you have "mastered" a topic, it is time to move on to the next one. This is different from traditional book or classroom learning, where this week we are learning about polynomials and next week we are doing trigonometry. One person is ready to move on to trigonometry on Tuesday; the next person still isn't ready on Friday. It seemed that MBL would be hard to implement in a classroom, but fairly easy in a digital product like ours.

Our version of micro-learning was going to be fairly basic. A round would be ten questions, with feedback after each question. There would be topics that users could pick from, and these topics would have to be very specific. It could not be 150 algebra questions with "Algebra" as the topic; there needed to be specific topics for order of operations, factorials, and quadratic equations, for example. The user would be trying to "master" that topic. We decided that 80% correct, with at least ten questions answered, would qualify as mastery of a topic. The 80% was somewhat arbitrary, but the ASVAB met the criteria of "an inch thick and a mile wide," meaning that there were a lot of topics but the individual questions were not all that detailed. We had an ASVAB tutor confirm that 80% seemed about right.
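Just to make the rule concrete, here is a minimal sketch of that mastery check in Python. The names and structure (AttemptLog, is_mastered) are purely illustrative, not our actual implementation:

```python
from dataclasses import dataclass

MASTERY_THRESHOLD = 0.80   # 80% correct (somewhat arbitrary, sanity-checked by a tutor)
MIN_QUESTIONS = 10         # must have answered at least ten questions on the topic

@dataclass
class AttemptLog:
    """All recorded answers for one user on one specific topic."""
    answered: int = 0
    correct: int = 0

    def record(self, was_correct: bool) -> None:
        self.answered += 1
        if was_correct:
            self.correct += 1

def is_mastered(log: AttemptLog) -> bool:
    """A topic counts as 'mastered' at >= 80% correct over at least ten answers."""
    if log.answered < MIN_QUESTIONS:
        return False
    return log.correct / log.answered >= MASTERY_THRESHOLD

# Example: 9 of 11 correct is ~82%, which counts as mastery.
print(is_mastered(AttemptLog(answered=11, correct=9)))  # True
```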

After the ten-question round was over, there would be a screen that we called Round Review. This would tell you how many points you accumulated (each correct answer was worth 100 points), what your percentage correct was, and how long it took you to answer the questions. Below that would be a review of all the questions and their correct answers.

The customization part was going to have to be extremely limited for the MVP. However, in the round review we could have recommended videos. These would just be short videos that were connected to specific questions, maybe two or three per topic. In the round review, you would see videos that described how to solve certain problems based on how you answered. This idea seemed to fit especially well with math problems, which were a key area of concern for ASVAB students. So the round review would look something like this:


[Screenshot: the Round Review screen]
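To spell out what the Round Review had to compute, here is a rough Python sketch covering the points, the percentage correct, the elapsed time, and the recommended videos tied to missed questions. The class names and fields are hypothetical, not our real data model:

```python
from dataclasses import dataclass
from typing import Optional

POINTS_PER_CORRECT = 100

@dataclass
class Question:
    prompt: str
    correct_answer: str
    video_url: Optional[str] = None  # short video tied to this question, if one exists

@dataclass
class Answer:
    question: Question
    given: str

    @property
    def is_correct(self) -> bool:
        return self.given == self.question.correct_answer

def build_round_review(answers: list[Answer], seconds_elapsed: int) -> dict:
    """Summarize a ten-question round for the Round Review screen."""
    correct = sum(a.is_correct for a in answers)
    recommended = [a.question.video_url
                   for a in answers
                   if not a.is_correct and a.question.video_url]
    return {
        "points": correct * POINTS_PER_CORRECT,            # 100 points per correct answer
        "percent_correct": round(100 * correct / len(answers)),
        "seconds_elapsed": seconds_elapsed,
        "recommended_videos": recommended,                  # based on what the user missed
        "review": [(a.question.prompt, a.question.correct_answer) for a in answers],
    }
```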


For gamification, there would be challenges, badges, and mastery. The challenges would measure things like time spent answering questions, number of questions answered, questions answered correctly, number of rounds completed, perfect rounds, and topics mastered. Any challenge that was completed would be acknowledged with a visual award at the end of the round, before the round review. There would also be a Challenges screen that the user could look at whenever they wanted. This is what that would look like:



[Screenshot: the Challenges screen]
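A sketch of how the challenge checks could work is below: each challenge watches one running stat and fires once the target is reached. The specific challenge names and targets are made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Challenge:
    name: str
    stat: str     # e.g. "questions_answered", "perfect_rounds", "topics_mastered"
    target: int

# Hypothetical examples; the real list covered time spent, questions answered,
# correct answers, rounds completed, perfect rounds, and topics mastered.
CHALLENGES = [
    Challenge("Getting Started", "questions_answered", 50),
    Challenge("Sharp Shooter", "perfect_rounds", 3),
    Challenge("Specialist", "topics_mastered", 1),
]

def completed_challenges(stats: dict[str, int]) -> list[str]:
    """Return the names of challenges whose targets the user has now hit."""
    return [c.name for c in CHALLENGES if stats.get(c.stat, 0) >= c.target]

# After a round, the running totals are checked and any newly completed
# challenge is shown as a visual award before the Round Review.
print(completed_challenges({"questions_answered": 62, "perfect_rounds": 1}))
# ['Getting Started']
```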


For the badges, the idea was that after you accumulated a certain number of points, you would receive a new badge. There was a screen, similar to the Challenges screen, that would show you the badges.




[Screenshot: the Badges screen]
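Here is a small sketch of the badge and progress logic, assuming badges are simply point thresholds. The badge names and point values are invented for illustration:

```python
# Hypothetical thresholds; each badge unlocks at a point total.
BADGES = [
    (0, "Recruit"),
    (1_000, "Private"),
    (5_000, "Sergeant"),
    (15_000, "Captain"),
]

def current_badge(points: int) -> str:
    """Highest badge whose threshold the user has reached."""
    return max((b for b in BADGES if points >= b[0]), key=lambda b: b[0])[1]

def progress_to_next_badge(points: int) -> float:
    """Fraction of the way from the current badge to the next (1.0 if maxed out)."""
    thresholds = [t for t, _ in BADGES]
    higher = [t for t in thresholds if t > points]
    if not higher:
        return 1.0
    lower = max(t for t in thresholds if t <= points)
    return (points - lower) / (min(higher) - lower)

print(current_badge(3_200))           # Private
print(progress_to_next_badge(3_200))  # 0.55 -> drives the progress bar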


And we also thought that there needed to be some way for the user to see their progress (how close they were to their next badge), so we put a progress bar on top of the question screen. We also put in a 50/50 button that you could use once per round; it just eliminated two of the answers. I think we stole it from that game Who Wants to Be a Millionaire that used to be on television. The 50/50 button probably did not meet the criteria of an MVP, but it wouldn't be hard to throw in, and we thought it would let people wrestle with a few of the harder questions rather than just guessing. In the footer (bottom of the screen), you could see the badge name and icon. On the left was the home button, which is that thing that looks like a clock.


[Screenshot: the question screen with progress bar, 50/50 button, and footer]
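The 50/50 button itself is about as simple as it sounds. A minimal sketch, assuming each question stores its answer choices and the correct answer:

```python
import random

def fifty_fifty(choices: list[str], correct: str) -> list[str]:
    """Keep the correct answer plus one randomly chosen wrong answer."""
    wrong = [c for c in choices if c != correct]
    kept = [correct, random.choice(wrong)]
    random.shuffle(kept)  # don't give away which answer is correct by its position
    return kept

# Example: four answer choices reduced to two, usable once per round.
print(fifty_fifty(["12", "15", "18", "21"], correct="15"))
# e.g. ['15', '21']
```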

The notifications that the user received at the end of the round for hitting a challenge, mastering a topic, or getting a new badge looked something like this:

[Screenshot: an end-of-round notification]


And we needed some type of homepage. We came up with the four-quadrant layout below. The Topics quadrant involved the user picking a topic and starting a round. We would also show all of our videos in the Videos quadrant, track missed questions in a third quadrant, and give info about the exam in the fourth. We actually left the missed questions and info quadrants as dead screens for the MVP because we just wanted to get feedback on what people would think of the idea.


[Screenshot: the homepage quadrants]

That was pretty much going to be everything for the MVP. Our exciting ideas regarding GPS (send people notifications when they enter a preferred location) or hands-free studying (study while driving or walking to the store) would have to wait. We didn't even give users the ability to pick their favorite times to receive a notification. We picked out ten topics from the ASVAB and had about 200 questions with feedback. We made 25 videos, which were just me doing my best high school math teacher impersonation in front of a whiteboard. The videos were recorded with my iPhone and uploaded to YouTube (it's easier to integrate YouTube videos into an app).

We thought what we had in the MVP focused on micro-learning: you could do a round in only a few minutes. There was a certain level of customization through the videos that other apps and courses did not offer, and the customization focused on math because that's what ASVAB students needed help with the most. And there was gamification that made the app feel more interactive.

Identifying Our Riskiest Assumption

Riskiest assumption: if it is not true, your product will definitely fail. Usually, the riskiest thing to assume is that your customers have the specific problem that you're trying to solve. If they don't have that problem, they won't care about your product.

Below is a list of our assumptions:

· People will like this product

· People will pay for this product

· People will like gamification. It was possible that gamification was good for fitness apps, but not as much for exam prep

· The app will work, meaning it will actually improve scores and help people study

· The videos will be useful. It is possible that someone watches the video and is still confused. Then what is supposed to happen?

· YouTube referral marketing will work, meaning that a video post from one of these YouTube channels (which have hundreds of thousands of views on their "how to pass the ASVAB" videos) talking about our app will actually get people to go to our app store page

· Users want to/are willing to study while waiting in line or taking the bus to work (they like micro-learning)

· We can build this MVP product

· People actually have to study for the ASVAB

We really already knew that people needed to study for the ASVAB. Our earlier research had shown us this. So we crossed that one off the list.

We were not sure that YouTube referral marketing would work. But we would need to have a full product on the App Store for that. So that assumption would need to be tested later.

The recommended videos would almost surely be useful to some users, while others would watch them and still be confused. Still, they were an improvement on anything else out there: they were videos you could watch that were chosen based on how you answered questions. YouTube and Khan Academy didn't offer this; they just had videos. This was not the riskiest assumption, although how to improve on it was something we were already thinking about.

We now had a software architect named Joseph. Rob (our advisor, the former VP of Technology at McGraw-Hill) had worked with him on edtech projects in the past. Joseph had a master's degree in computer science from MIT. We were confident that we could build the MVP.

This meant that our list of the riskiest assumptions had shrunk to:

· Users will like this

· Users will pay for this

· Users will like gamification

· The app will work (improve scores)

· Users will like micro-learning

After some deliberation, it seemed that the riskiest assumptions were whether the app would improve scores and whether users would like micro-learning. It seemed to us that if the app proved to increase scores and people liked our style of micro-learning, then there was a pretty good likelihood that users would like the app. There were plenty of people paying for the end result (a certain score on the ASVAB) through books, tutors, and other digital products; if the app could be proven to get the same or better results with less effort, then we thought they would pay. Gamification seemed more like a design question and not as significant as improving scores and micro-learning.

Improving scores was going to be harder to measure with an MVP. To measure whether or not the app improved scores, there would have to be some kind of practice test that the user took beforehand, and we would need all of the content necessary to study, which would mean probably 1,500 questions and at least 50 or 60 videos. That would take more time and money.

On the other hand, learning about users' thoughts on micro-learning could be much more immediate. But we kept coming back to the idea that we needed to be measuring whether users were learning, even if the measurement wasn't perfect. We had to be measuring whether the MVP was helping in the learning process. So we stayed with the concept of ten topics, 200 questions, and somewhere around 25 short videos for our MVP app. There would be a mini ASVAB test, based just on these ten topics, that the user could take before they started using the app. It would be only 20 questions. After ten days we would ask them to take another test with similar but slightly different questions and see if there was a difference in their scores.
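The measurement itself would be simple arithmetic. A minimal sketch, assuming a 20-question pre-test and a similar post-test about ten days later:

```python
def score(correct: int, total: int = 20) -> float:
    """Score on the 20-question mini test, as a percentage."""
    return 100 * correct / total

def improvement(pre_correct: int, post_correct: int, total: int = 20) -> float:
    """Percentage-point change between the pre-test and the post-test."""
    return score(post_correct, total) - score(pre_correct, total)

# Example: 11/20 before using the app, 15/20 ten days later -> +20 points.
print(improvement(pre_correct=11, post_correct=15))  # 20.0
```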

Hypothesis

The difference between assumptions and hypotheses is that hypotheses are actionable: they have a target group, an expected outcome, and a strategy to get customers to act in a certain way.

Basic Hypothesis: we believe (target group of people) will (predicted action) because (reason).

Our Basic Hypothesis: we believe ASVAB students will benefit from studying through micro-learning in our mobile app because it will lessen the amount of studying required during their free time while also customizing to them through our recommended videos.

Specific Hypothesis: the most specific way of thinking about a hypothesis is: we believe (subject) has a (problem) because (reason); if we (action), this (metric) will improve.

Our Specific Hypothesis: we believe that struggling ASVAB students need to improve their math skills through numerous repeated customized interactions with the content. If we offer customized micro-learning (learn something in 5 or 10 minutes) through a mobile app, their scores will improve.

In my next post I will go over our MCS (minimum criteria for success) as well as how we executed and evaluated our MVP experiment.
