Small businesses are the backbone of our country and our economy. But their survival rates are pretty grim. We'd like to help change that.
Good business decisions require good data.
But traditional BI dashboards take valuable time to set up, log in to, and navigate before you can review pertinent sales data and trend reports.
Talking, even while we’re doing other things, is an innate human skill.
It is a hands-free activity that requires no context switch and takes little time or effort. Voice is our natural interface.
With SalesTalk by Skillsai, business owners can now discover how their business is doing, just by asking! And talk about an API Mashup! We are bringing together the excitement and benefits of the Internet of Things (IoT) with the efficiency of a Voice User Interface (VUI) to create an audible and visual Business Intelligence (BI) solution for business owners using WooCommerce or Stripe to run their store. Additional platforms are being added as we speak. (Get it? See what we did there? ...grin...)
In the simplest terms – Sales reports that talk!
About Our Team
With more than 60 years of business and technical experience between us, the three founders of Skillsai are passionate about the future of IoT and of software that uses a voice user interface - we believe it is the next big thing!
Using the engine we've built (with Node.js, PHP, and a number of other libraries), we can integrate with the APIs of the tools small business owners use most. You can learn more about our team in the (admittedly hokey) video below.
SalesTalk High Level Design
SalesTalk SWIM modules perform the ETL work of extracting a business's sales data from its eCommerce store or payment gateway, transforming that data into a model that works well for voice, and storing it in a data store on AWS.
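As a rough illustration, a single SWIM extract-and-load pass might look something like the sketch below. It assumes a Stripe data source and the official stripe and aws-sdk Node.js libraries; the DynamoDB table name (SalesTalkSales) and the field mapping are hypothetical, not necessarily what SalesTalk actually uses.

var stripe = require('stripe')(process.env.STRIPE_SECRET_KEY);
var AWS = require('aws-sdk');
var docClient = new AWS.DynamoDB.DocumentClient();

function syncRecentCharges(merchantId) {
  // Extract: pull the most recent charges from the merchant's Stripe account.
  return stripe.charges.list({ limit: 100 }).then(function (charges) {
    // Transform: keep only the fields the voice reports need.
    var items = charges.data.map(function (charge) {
      return {
        merchantId: merchantId,
        chargeId: charge.id,
        amount: charge.amount / 100,   // Stripe amounts are in cents
        currency: charge.currency,
        created: new Date(charge.created * 1000).toISOString()
      };
    });
    // Load: write the summarized records to the DynamoDB data store.
    return Promise.all(items.map(function (item) {
      return docClient.put({ TableName: 'SalesTalkSales', Item: item }).promise();
    }));
  });
}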
Our SWS web server accesses that data on demand whenever a user asks Alexa a question like "What were my sales yesterday?" Using the Alexa Voice Service (AVS) and the Alexa SDK, we can handle thousands of Report Request variations with a small number of Intents (only 2!) and a small number of Custom Slots. Handling the many ways a user might phrase a sales question, however, does require nearly 2000 sample utterances in our model.
Voice User Interface Diagram
The flow below describes, at a high level, how the voice interaction works.
While the flow looks deceptively simple, parsing the Report Request is one of the most challenging pieces. A report request can have many permutations, combining various slots in various orders (a handler sketch follows the list below). Some example sample-utterance patterns are:
ChooseReport {Report} {City}
ChooseReport {Report} {City} {Date}
ChooseReport {Report} at {City}
ChooseReport {Report} for {City}
ChooseReport {Report} in {City}
ChooseReport {Report} at {City} {Date}
ChooseReport {Report} for {City} {Date}
ChooseReport {Report} in {City} {Date}
ChooseReport {Prefix} {Report} {City}
ChooseReport {Prefix} {Report} {City} {Date}
ChooseReport {Prefix} {Report} at {City}
ChooseReport {Prefix} {Report} for {City}
ChooseReport {Prefix} {Report} in {City}
ChooseReport {Prefix} {Report} at {City} {Date}
ChooseReport {Prefix} {Report} for {City} {Date}
ChooseReport {Prefix} {Report} in {City} {Date}
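To give a feel for how the server side handles these, here is a minimal handler sketch using the Node.js alexa-sdk. The slot names match the patterns above, but getReport is a hypothetical helper that would query the SWS data store, and the spoken responses are placeholders.

var Alexa = require('alexa-sdk');

var handlers = {
  // Fires for every ChooseReport utterance pattern shown above.
  'ChooseReport': function () {
    var slots = this.event.request.intent.slots;
    var report = slots.Report.value;            // e.g. "total sales"
    var city = slots.City && slots.City.value;  // optional
    var date = slots.Date && slots.Date.value;  // optional ISO-style value
    var self = this;

    // getReport is a hypothetical helper that queries the SWS data store.
    getReport(report, { city: city, date: date }, function (err, summary) {
      if (err) {
        self.emit(':tell', 'Sorry, I could not find that report.');
      } else {
        self.emit(':tell', summary);            // e.g. "Total sales yesterday were ..."
      }
    });
  }
};

exports.handler = function (event, context) {
  var alexa = Alexa.handler(event, context);
  alexa.registerHandlers(handlers);
  alexa.execute();
};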
{Report} currently includes Total Sales, Unit Sales, Best Selling, and Worst Selling. We will add new reports every couple weeks.
{Prefix} includes phrases like "Tell me the" and "What were my."
{City} uses a built-in slot from Amazon (AMAZON.CITY), but because many city names are shared across states - Salem, for example, exists in OR, UT, and MA, to name just a few - the logic is more complicated. If sales for Salem are requested, we built in some simple "contextual awareness": SalesTalk asks a follow-up question about which state the user had in mind, and that answer is joined to the original query so Alexa can return the correct result with the state clarified.
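A minimal sketch of that follow-up, reworking the ChooseReport handler from the sketch above and adding a clarification handler. The ClarifyState intent name, the pendingQuery attribute, and the isAmbiguousCity helper are all hypothetical.

  'ChooseReport': function () {
    var slots = this.event.request.intent.slots;
    var city = slots.City && slots.City.value;

    if (city && isAmbiguousCity(city)) {   // hypothetical lookup, e.g. "Salem"
      // Park the original request in session attributes and ask for the state.
      this.attributes.pendingQuery = { report: slots.Report.value, city: city };
      this.emit(':ask', 'Which state is ' + city + ' in?',
                        'Please tell me the state, for example Oregon.');
      return;
    }
    // ... otherwise run the report as usual ...
  },

  'ClarifyState': function () {
    // Join the state answer to the parked query and run it.
    var pending = this.attributes.pendingQuery || {};
    pending.state = this.event.request.intent.slots.State.value;
    // ... look up the report for pending.report / pending.city / pending.state ...
    this.emit(':tell', 'Here are your ' + pending.report + ' for ' +
                       pending.city + ', ' + pending.state + '.');
  }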
In addition, until the user requests a different Report Name, we "remember" the initial query and assume the next question refers to the parameters in the initial query, with the relevant clauses replaced. This allows the user to "drill down" for more details. An example would be if the user asked, "Give me the Total Sales for zip code 30303 in 2016." They could follow that up by saying "In June" or "last week" and SalesTalk handles the logic to return the correct data. Asking for a new report, such as Unit Sales, then clears the previous queries out of the queue and begins anew.
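In practice, that drill-down amounts to merging any newly supplied slot values into the remembered query. A rough sketch of that logic inside the ChooseReport handler, with the lastQuery attribute name being our hypothetical choice:

    var slots = this.event.request.intent.slots;
    var previous = this.attributes.lastQuery || {};
    var newReport = slots.Report && slots.Report.value;

    // A new report name starts a fresh query; otherwise we refine the last one.
    var query = (newReport && newReport !== previous.report) ? {} : previous;

    query.report = newReport || previous.report;
    if (slots.City && slots.City.value) { query.city = slots.City.value; }
    if (slots.Date && slots.Date.value) { query.date = slots.Date.value; }

    this.attributes.lastQuery = query;   // remembered for the next follow-up
    // ... run the report for query and speak the result ...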
{Date} handling requires some of the most complicated logic as users can ask for a single date, a date range, or a "named" time frame like last week, last quarter, on Christmas, etc.
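The built-in AMAZON.DATE slot already normalizes many date phrases into ISO-style strings (a single day arrives as 2016-12-25, "last week" as an ISO week like 2016-W23, a month as 2016-06, a year as 2016), so much of our work is turning those strings into a start/end range. A rough sketch, with the range helpers left hypothetical and many cases (quarters, seasons, named holidays, relative references) ignored:

function dateSlotToRange(value) {
  if (/^\d{4}-\d{2}-\d{2}$/.test(value)) {   // single day, e.g. "2016-12-25"
    return { start: value, end: value };
  }
  if (/^\d{4}-W\d{2}$/.test(value)) {        // ISO week, e.g. "2016-W23"
    return isoWeekToRange(value);            // hypothetical helper
  }
  if (/^\d{4}-\d{2}$/.test(value)) {         // month, e.g. "2016-06"
    return monthToRange(value);              // hypothetical helper
  }
  if (/^\d{4}$/.test(value)) {               // whole year
    return { start: value + '-01-01', end: value + '-12-31' };
  }
  return null;                               // quarters, seasons, etc. handled elsewhere
}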
In addition, there are a number of other slots like {adjectives}, {state}, {reports}, {groups}, and {actions}. We also handle requests to repeat the response, repeat the zip code, and ask for help, as well as common questions like the current date or time, which saves users from having to cancel out of the skill.
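Several of those conveniences map onto Amazon's built-in intents. Extending the handlers object above, a couple of them might look roughly like this; lastResponse is a hypothetical attribute we would set whenever a report is spoken.

  'AMAZON.RepeatIntent': function () {
    // Replay whatever was said last (stored when the report was emitted).
    this.emit(':tell', this.attributes.lastResponse || 'I have nothing to repeat yet.');
  },

  'AMAZON.HelpIntent': function () {
    this.emit(':ask', 'You can ask things like: what were my total sales yesterday?',
                      'Try asking for total sales or unit sales for a city and date.');
  }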
Skillsai Business Model
SalesTalk will be one of the first Amazon Alexa skills from an independent development group to include a viable monetization strategy - something every ecosystem needs in order to grow and thrive! While users can subscribe to Skillsai at no cost and try out the audible reports (as well as a visual BI dashboard) using the sample WooCommerce store data we provide, they will need to select a report package and subscribe in order to connect their own data sources to the SalesTalk platform.
We're currently accepting qualified beta users and plan to launch the product publicly in Q1 2017. If you or someone you know uses Stripe or WooCommerce in their business and would be interested in our solution, please contact us at info@skillsai.com. We're excited about where this path takes us, and we hope you will follow our journey at https://www.facebook.com/skillsai/ or on Twitter @skills_ai.