
CTO Note: methinks is improving its video technology this year

  • Wilson Li

I don’t think we’ve shared the story of our first iteration of methinks. My co-founder Philip Yun recounted our founding in VentureBeat last month. But there’s a Lean Startup story in our founding that helps people understand how we got to market and created customer momentum so quickly. I’ll share that story here as context for the news about our big video technology upgrade arriving later this year.


The methinks culture is about solving big problems in remote research. For a long time, app developers have been limited to simple screen-sharing capabilities: users could only share their mobile screens while the app being tested was open. This has been the industry standard, because all of the startups racing to create the best qualitative video research platform use the same or similar video technology vendors. I can’t speak for other companies, but when we started to build our prototype of methinks, we used “off the shelf” solutions and then migrated to best-of-breed solutions. And when we started to get good feedback, we literally followed the Lean Development handbook and shipped Minimum Viable Products that we were nearly embarrassed by, so that we could get more feedback from our early customers in games and media.


All of that worked well. But as we attracted larger customers in auto, consumer electronics, and banking, the engineering overhead of working with standard solutions created limitations. And, worse, many off-the-shelf video standards forced us to pass the limitations of the underlying video technologies on to our customers. That didn’t feel right.


As I noted above, in most cases it’s impossible to share a mobile device’s screen when the underlying app is closed, so users are limited to sharing content within the app. This limitation creates friction in research interviews -- it puts screen-capture requirements on the research subject, and that’s just awkward. Moreover, watching a livestream in real time with sub-second latency as the researcher on the other end of the video chat requires Adobe Flash, which becomes obsolete in 2020. Conventional livestream technologies can eliminate the Flash requirement, but they always come with a huge cost: latency.




methinks lets you interview consumers using the best video chat technology available, with screen sharing, recording, video file-sharing, and annotation, so that you can capture key consumer insights impacting your whole company. methinks is the best qualitative research solution available.

As soon as we received funding this year, we started building video research capabilities that are super easy to use, even for the non-tech savvy. Traditional video chat and screen-sharing solutions typically require a certain level of technical expertise, and relying on a third-party vendor had prevented us from further improving the user experience of the methinks platform and opening it to users of all ages and any level of technical expertise. We’re happy to say that this period of development is behind us.


So, to restate the problem: how do we let methinks Thinkers share their mobile device’s screen with a single tap, wirelessly, regardless of which app they open, while engaging with the researcher in a real-time video chat with only milliseconds of delay?


Solution: We developed our own video chat backend, built on the Web Real-Time Communication protocol (WebRTC). With the improved backend, we dramatically reduce the technical requirements for conducting and participating in remote user research, which in turn opens the door of remote qualitative research to a much broader audience, with greater insights to be learned via better, less-clunky screen sharing.
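For readers curious what this looks like under the hood, here is a minimal, illustrative browser-side sketch of the standard WebRTC flow: camera, microphone, and screen tracks are attached to a single peer connection, and offer/candidate messages are relayed over a signaling channel. This is not the methinks implementation itself; the sendToSignalingServer helper is a hypothetical stand-in, and on mobile devices screen capture comes from platform APIs (ReplayKit on iOS, MediaProjection on Android) rather than getDisplayMedia.

    // Illustrative TypeScript sketch of a sender-side WebRTC session.
    async function startInterviewSession(
      sendToSignalingServer: (msg: object) => void
    ): Promise<RTCPeerConnection> {
      // One peer connection carries both the face-to-face chat and the shared screen.
      const pc = new RTCPeerConnection({
        iceServers: [{ urls: "stun:stun.l.google.com:19302" }], // public STUN; TURN omitted for brevity
      });

      // Camera and microphone for the moderated video chat.
      const chat = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
      chat.getTracks().forEach((track) => pc.addTrack(track, chat));

      // Screen capture. In a browser this is getDisplayMedia; on mobile the
      // equivalent comes from ReplayKit / MediaProjection.
      const screen = await navigator.mediaDevices.getDisplayMedia({ video: true });
      screen.getTracks().forEach((track) => pc.addTrack(track, screen));

      // Trickle ICE: relay each candidate to the other peer as it is discovered.
      pc.onicecandidate = (event) => {
        if (event.candidate) {
          sendToSignalingServer({ type: "candidate", candidate: event.candidate });
        }
      };

      // Standard offer/answer handshake over the signaling channel.
      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);
      sendToSignalingServer({ type: "offer", sdp: pc.localDescription });

      return pc;
    }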


Another benefit of building our own video chat backend is that our apps can easily enable ultra-low-latency livestreams, allowing researchers to observe users’ unmoderated usage of their products, with live intercept features, anywhere with internet access. That low latency is critical to smooth conversations: everyone benefits from a natural, clear back-and-forth without talking over each other or waiting for the on-screen visuals to catch up with the conversation.
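On the researcher’s side, the observer connection can be receive-only. The sketch below, again purely illustrative and not our actual code, shows how a recvonly WebRTC connection could attach an incoming screen stream to a video element; the liveView element and signaling helper are hypothetical.

    // Illustrative TypeScript sketch of a receive-only observer session.
    function watchLiveSession(
      sendToSignalingServer: (msg: object) => void,
      liveView: HTMLVideoElement
    ): RTCPeerConnection {
      const pc = new RTCPeerConnection({
        iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
      });

      // The observer only receives media; no camera or microphone is published.
      pc.addTransceiver("video", { direction: "recvonly" });
      pc.addTransceiver("audio", { direction: "recvonly" });

      // Play incoming tracks as soon as they arrive. WebRTC keeps the delay in
      // the sub-second range, unlike HLS-style streams that buffer several seconds.
      pc.ontrack = (event) => {
        liveView.srcObject = event.streams[0] ?? new MediaStream([event.track]);
        liveView.play();
      };

      pc.onicecandidate = (event) => {
        if (event.candidate) {
          sendToSignalingServer({ type: "candidate", candidate: event.candidate });
        }
      };

      pc.createOffer()
        .then((offer) => pc.setLocalDescription(offer))
        .then(() => sendToSignalingServer({ type: "offer", sdp: pc.localDescription }));

      return pc;
    }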


We’re almost done testing the first deployment of this new backend, and we expect to deploy widely for all of our customers at the end of the year.



#videolivestreaming #videointerviews #WebRTC #realtimecommunication #customerresearch #livescreenshare #videochat