DevOps Chat: Autonomous Test Innovation Using AI/ML with Functionize

At the speed of DevOps, automated testing is essential for QA to keep pace with software creation. Automated testing is ripe for innovation, too. How do we know we are performing the most relevant tests, that new functions in the software aren’t being overlooked, or that highly dynamic applications aren’t outpacing static test cases and logic?

Functionize aims to achieve autonomous testing, making the creation and maintenance of tests more efficient. Founder and CEO Tamas Cser joins us on this episode of DevOps Chat to share several innovations his company brings to DevOps teams.

Join us as we discuss how writing test cases in English helps capture intent and results in less brittle tests over time, and about the ability to reach into Functionize at run time and programmatically change object models and internals. Learn how AI improves visual detection of web pages and changes in web page behavior, and how AI can root out test cases that may no longer be valid as software functionality changes.

As usual, the streaming audio is immediately below, followed by the transcript of our conversation.

Mitch Ashley: Hi, everyone. This is Mitch Ashley with DevOps.com, and you’re listening to another DevOps Chat podcast. Today, I’m joined by Tamas Cser, who is Founder and CEO of Functionize. Our topic today is a very interesting one—AI machine learning for automated DevOps testing. Tamas, welcome to the DevOps Chat podcast.

Tamas Cser: Hey, Mitchell, it’s great to be here. Thanks for having me.

Ashley: Well, absolutely. It’s my pleasure. Thank you for joining us. Would you start out by introducing yourself, telling us a little bit about you and also about Functionize?

Cser: Absolutely. So, my name is Tamas Cser, and I’m the founding CEO of Functionize. I’ve been in technology for about 15 years. I started as a consultant running my own company here in the Bay Area focused on development, and we also did a lot of DevOps and infrastructure building, initially just self-hosted and then cloud hybrid.

And during those years, I bumped into testing a lot of times, really being at the front of the house, running the company and dealing with customers. It really became very apparent that testing is lacking in a lot of areas, and as DevOps matured and got traction, we got into it more and, again, testing is a huge problem.

So, really, that’s when I got into testing, tried to look at the problem, understand what opportunities we have now with cloud and big data that have not been explored and how that could be applied to solve some of these problems and this is really how Functionize was born about four years ago.

Ashley: Mm-hmm.

Cser: And ever since, it’s been an amazing ride. And now, you know, four years in, we have really acquired a great team and we have a very exciting product and customers and just, it’s been an incredible journey.

Ashley: Great. Well, I definitely wanna hear more about the product, et cetera. Let’s start maybe with, in a software deployment world, especially in a DevOps world where we’re automating and we’re shifting things left, testing is only as good as the actual testing that happens. Bad testing automated is still bad testing.

What are some of the hindrances that you see as we’ve changed to more of a DevOps style with containers and cloud? What kinds of things has that introduced into the testing criteria, and how do you approach solving that differently than maybe a traditional, Selenium-style tool or something like that?

Cser: I think that the largest change that comes with all of the incredible progress that we have made in software development, if you think about the entire DevOps toolchain and cloud and a lot of the automation that happens, including containers, is speed.

And so, with that speed, software gets deployed much more frequently, and companies are able to iterate very, very fast around their product. And that provides a huge challenge for regression testing, as well as getting testing on any of the new features that you’re releasing, and the maintenance especially around it is a huge challenge.

So, I would say that, really, that speed we have seen recently with some of these newer methodologies, and then agile, multiple releases per day, is putting huge pressure on the quality departments. And so, there’s a huge opportunity to improve around that and really bring QA up to speed, if you will, when it has been kind of sitting stagnant for a long time, not seeing a lot of innovation if you look back over the past 15 to 20 years.

Ashley: Well, you know, it stands to reason, if you’re producing a lot more software much quicker, you need to test, obviously, much more quickly. So, automation, of course, is very important.

What do you also think about the problem of how do you know you’re testing the right things, that you’ve got the right kind of coverage, keeping in mind, you know, you may have developers and DevOps engineers as well as QA people defining what those test criteria are?

Cser: So, I think coverage is a very interesting topic, and it’s an important question that a lot of our customers have: how much coverage should I have, and do I have the right coverage? And this is also an area where we can collect, understand and analyze data from the application itself and even from live user usage, in order to see where the patterns are, what’s being used, and what potential impact it might have on the software if a particular feature or functionality were to break.

And so, again, I think there’s a huge opportunity in bringing that—this is certainly one area that we’re focused on with our autonomous next generation capability where we’re trying to close the gap on that and have an impact on companies’ abilities to analyze and understand what, really, they need to test.
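
To make that idea concrete, here is a minimal sketch of usage-weighted test prioritization. It is not Functionize’s actual model; the features and numbers are invented, and in practice the inputs would come from production analytics.

```python
# A minimal sketch of usage-weighted test prioritization, not Functionize's
# actual model. Rank each feature by how often real users exercise it and
# how costly a breakage would be, then spend the test budget top-down.

# Invented numbers; in practice these would come from production analytics.
features = {
    "checkout":     {"daily_users": 12000, "breakage_cost": 10},
    "search":       {"daily_users": 45000, "breakage_cost": 6},
    "profile_edit": {"daily_users": 800,   "breakage_cost": 2},
}

def risk_score(stats):
    """Simple risk proxy: usage volume times the business impact of a failure."""
    return stats["daily_users"] * stats["breakage_cost"]

# Highest-risk features get test coverage first.
for name, stats in sorted(features.items(),
                          key=lambda kv: risk_score(kv[1]), reverse=True):
    print(f"{name}: risk={risk_score(stats):,}")
```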

Ashley: I believe, if I recall right, with your technology, your product, you actually write the tests in kind of an English format, right, rather than code? Is that correct?

Cser: So, yes, we do have an English version of the test creation capability. That’s not the only one, but one of the key areas we’re innovating around is understanding user intent and what the test case needs to do, and being able to build modeling around that. And that becomes really interesting and really important in how your test cases are maintained over time.

So, these test cases become a lot less brittle with respect to the very specific implementation of a particular button or HTML element interaction. And so, that’s a really exciting area that we’re innovating in.
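
The conversation doesn’t show Functionize’s actual syntax, so as an illustration only, here is the contrast Cser is drawing: a standard Selenium step bound to one CSS selector, next to an intent-level description of the same action that leaves element resolution to the engine.

```python
# Illustration only: standard Selenium, not Functionize's syntax.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")

# Brittle: bound to one specific CSS class that any redesign can rename,
# breaking the test even though the user-facing behavior is unchanged.
driver.find_element(By.CSS_SELECTOR, "button.btn-primary.login-submit").click()

# An intent-level test instead states what the user is trying to do and
# leaves it to the engine to resolve which element fulfills that intent:
intent_test = [
    "Go to the login page",
    "Enter the username and password",
    "Click the login button",
    "Verify that the account dashboard is shown",
]
```

The second form survives a renamed class or a restyled button because nothing in it names an implementation detail.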

Ashley: Excellent. So, I would imagine also, being part of a CI/CD process, you have to integrate with a lot of different tools or at least fit into a framework where, you know, you obviously don’t own the process end to end. Tell us a little bit about how you’ve approached that.

Cser: So, the way we approach that, and again, this is a super important part of DevOps and building a piece of technology these days because the toolchain is incredibly important, is two ways: One, we have generic APIs and provide out-of-the-box solutions to integrate with kind of your usual suspects—GitHub and Jira—

Ashley: And Jenkins.

Cser: And various different CI tools like Jenkins.

Ashley: Mm-hmm, and Jira, yeah.

Cser: The other piece that we’re doing that’s really interesting is, we’re—we have an ecosystem within Functionize that allows our users, basically, to program our run time environment that actually executes the tests.

Ashley: Mm-hmm.

Cser: And this opens up all of the object models and kind of hidden internals of the actual test case running. This is really, really interesting, because it gives you the capability to do a really, really deep level integration even in the middle of an execution with third-party applications or any other external functionality that you might have to perform that’s kind of a complex and advanced use case.

Ashley: Now, am I correct in reading into what you’re saying in that while the tests are running, you can programmatically control Functionize and alter or adjust its behavior of what it’s testing or how it’s testing or something related to that?

Cser: Precisely, precisely. So, this is kind of a first-in-the-market capability, and beyond just controlling sort of the outcome of the test case or the behavior, you also get access to the rich data sets that we’re collecting, including all of the visual screenshots or videos that we’re collecting during execution.
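
Functionize’s real API isn’t spelled out in this conversation, so the sketch below is purely hypothetical. It only shows the shape of the capability Cser describes: user code registered as a hook that the runtime calls mid-execution, with access to the test’s object model.

```python
# Purely hypothetical sketch; this is not Functionize's real API. It shows
# the shape of a programmable test runtime: user code registered as a hook
# that can inspect and alter the run's object model while a test executes.

class ObjectModel:
    """Stand-in for the object model a runtime might expose mid-execution."""
    def __init__(self):
        self._elements = {"pay_button": None}  # pretend this locator went stale
    def element(self, name):
        return self._elements.get(name)
    def retarget(self, name, text):
        # Re-resolve a stale locator by visible text instead of its old selector.
        self._elements[name] = f"<element matching text {text!r}>"
        print(f"retargeted {name} -> {self._elements[name]}")

def on_step_complete(step_name, model):
    """Hypothetical hook the runtime would invoke after each test step."""
    if step_name == "submit_order" and model.element("pay_button") is None:
        model.retarget("pay_button", text="Complete purchase")

# Simulate the runtime invoking the hook in the middle of a run.
on_step_complete("submit_order", ObjectModel())
```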

Ashley: Mm-hmm. And I understand, too, you use some AI machine learning in some pretty unique ways with screen captures, maybe other ways in the product. Talk a little bit about that.

Cser: Absolutely, I’ll be happy to. So, as applications run again and again, obviously, how the application behaves and looks is really important. So, there’s one area that customers care about just around visual testing, which is, “Is my application displaying the way that I’m expecting it to and customers would expect it to?” So, that’s one key area that we’re working on, with template recognition capabilities that can detect breakages on the page. And this is based not on traditional kind of, you know, pixel-by-pixel computation looking at the difference, but on actual deep learning that can understand how your page normally looks and where we normally see changes, versus change sets that look like anomalies.
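
As a rough illustration of that distinction, and not the product’s implementation, compare a naive pixel diff, which fires on any change, with an anomaly-style score that measures the current change against the distribution of changes seen across previous runs.

```python
# Rough illustration of the distinction being drawn, not the product's
# implementation. A pixel diff flags *any* change; an anomaly approach flags
# only changes that fall outside what the page normally does.
import numpy as np

rng = np.random.default_rng(0)
baseline = rng.random((64, 64))                      # stand-in for a screenshot
current = baseline + rng.normal(0, 0.01, (64, 64))   # a minor render shift

# 1) Naive pixel-by-pixel comparison: noisy, fires on every minor change.
pixel_diff = np.mean(np.abs(current - baseline))

# 2) Anomaly-style comparison: score the change against the distribution of
# changes observed across many previous runs (a crude stand-in for a learned
# model of "how this page normally varies").
history = [np.mean(np.abs((baseline + rng.normal(0, 0.01, (64, 64))) - baseline))
           for _ in range(50)]
z = (pixel_diff - np.mean(history)) / np.std(history)

print(f"pixel diff: {pixel_diff:.4f}, anomaly z-score: {z:.2f}")
# Only a large z-score, a change unlike anything seen before, raises an alert.
```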

Ashley: Mm-hmm.

Cser: And then there’s another area—also, this is really interesting. Part of this can be used in root cause analysis of a page, for example, to understand what changed, what new elements were introduced, and potentially a test case that is now no longer valid could be automatically updated.

Ashley: Oh, so, in certain scenarios, maybe a page has changed sufficiently that the test no longer makes sense to run, that kind of a situation?

Cser: Exactly. Imagine that somebody added a couple of new form fields to a form, and now those are required and you can’t complete the test. So, that’s something we can recognize automatically with visual analysis as well as some DOM analysis, and suggest some solutions around it.
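
Here is a minimal sketch of the DOM-analysis side of that example, assuming two parsed snapshots of the same form. Diffing the fields and flagging newly added required ones identifies exactly the change that would make the old test unable to complete.

```python
# Minimal sketch of the DOM-diff idea (illustrative only): compare two
# snapshots of a form and flag newly added required fields -- exactly the
# change that silently breaks an existing "fill and submit" test.

old_fields = {"email": {"required": True}, "password": {"required": True}}
new_fields = {
    "email":    {"required": True},
    "password": {"required": True},
    "phone":    {"required": True},   # newly added, and required
    "company":  {"required": False},  # newly added, optional
}

added = {name: attrs for name, attrs in new_fields.items() if name not in old_fields}
blocking = [name for name, attrs in added.items() if attrs["required"]]

print("new fields:", sorted(added))
print("fields that block the old test:", blocking)
# A self-maintaining test could now be updated to fill 'phone' automatically.
```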

Ashley: Wow, we’ve come a long way from the screen scraping vector graphics days of testing UIs, haven’t we? [Laughter]

Cser: Oh, definitely. [Laughter] There’s a lot of exciting work that’s gone into it. And again, we are obviously drawing on incredible open source capabilities as well, and the advances that we’ve seen over the last year in NLP as well, so it’s really exciting progress that’s being made in the industry in general.

Ashley: Excellent. So, talk a little bit about the cloud environment. You know, we’re living in a world where you may be a cloud-native application, but you may also be in a hybrid private cloud, or maybe not even in a cloud at all, you know, private data centers, and also multi-cloud. What are some of the challenges those environments bring, and how might you tackle that?

Cser: So, that’s a great question. Cloud environments in general are wonderful for us to work with, and customers that are in the cloud and utilizing the cloud are easy for us to work with. It does not represent any kind of a major challenge. It actually makes it much easier for us to work together.

I would say that the cloud hybrid environments or on-prem environments certainly pose a challenge, and there are various ways that we work with customers like that to still be able to bring the value of the cloud, the scale and kind of a lot of the modeling that we do, and be able to still test applications that are hidden behind a firewall.

Ashley: Mm-hmm. Is that also because of just the environments, maybe the testing tools, et cetera, that are in a traditional data center, private data center environment just are unique to that environment versus a cloud environment is more standardized if you’re in Azure or Google or AWS—is that why?

Cser: I would say it has more to do with access. It’s the way that we can access the application. It’s the client’s ability to provision environments that might be dedicated for testing, and the way those environments can get the right compute resources so they can handle the load that we will put on them as you go into agile testing. Versus in a traditional environment where the hardware is limited and potentially, you know, you have to deal with some secure tunneling into the system, which is gonna slow it down. And so, overall, your speed decreases and the flexibility around those environments decreases. And the customer’s ability to provision new hardware, let’s say, because they would like to move faster for testing is greatly diminished.
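
To picture the tunneling cost, here is a minimal sketch, assuming a bastion host the customer exposes and an OpenSSH client on the test runner. All hostnames, ports and key paths are placeholders.

```python
# Minimal sketch of the secure tunneling described above. Hostnames, ports
# and key paths are placeholders; this assumes an OpenSSH client is installed.
import subprocess

# Forward local port 8080 to the on-prem app server (app.internal:443) via a
# bastion host the customer exposes. Every test request now pays the cost of
# this extra hop, which is the slowdown being described.
tunnel = subprocess.Popen([
    "ssh", "-N",                       # no remote command, just forwarding
    "-L", "8080:app.internal:443",     # local-port:remote-host:remote-port
    "-i", "/path/to/test_runner_key",  # dedicated key for the test runner
    "bastion@customer-gateway.example.com",
])

# ... point the test suite at https://localhost:8080 while the tunnel is up ...

tunnel.terminate()  # tear the tunnel down when the run finishes
```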

Ashley: One of the awesome things about entrepreneurs is, oftentimes, they take a problem that they’ve had in their career, maybe in their last job, and say, “There’s a better way to do this. Let me go create a product, create a company, go do that.” That’s part of your story, too.

What was it in your experience with software and testing that was a hindrance in and of itself? Were there specific problems, or just, there’s gotta be a better mousetrap? What did you bring with you from that experience to spin off and create Functionize?

Cser: That’s a great question. It really has to do with the pain, right? Getting the call from an angry customer when a particular bug regressed for the third time that week because your processes are not there—it’s really, really painful. And so, that’s really the initial starting point.

The second piece that I would say makes it a little bit unique, and we talk about this as well, is that shift left is really important so you can start testing early on, but it’s also really important to shift right. It’s really important to test production environments, because there’s a lot of things that happen these days when the application is literally compiled in real time in the browser, with lots of real-time dependencies, third parties and APIs and whatnot. So many things can go wrong in production that may not actually be code defects; they might have to do with some other defect or some other dependency problem.

So, that’s certainly one area where I am bringing Functionize in a heavy way, and kind of the way that I see the future going is both ways: we’re shifting left, and at the same time we’re also thinking about how we test production.
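
As a simple illustration of shift-right testing, and not a specific Functionize feature, a production check can verify not just your own page but the third-party dependencies the application pulls in at run time. The URLs here are placeholders.

```python
# Simple illustration of a shift-right check (not a specific Functionize
# feature): verify in production that the page *and* the third-party
# dependencies it pulls in at run time are actually responding.
import requests

endpoints = {
    "app page":         "https://example.com/",
    "payments API":     "https://api.payments.example.com/health",
    "analytics script": "https://cdn.analytics.example.com/tag.js",
}

for name, url in endpoints.items():
    try:
        r = requests.get(url, timeout=5)
        status = "OK" if r.status_code == 200 else f"HTTP {r.status_code}"
    except requests.RequestException as exc:
        status = f"FAILED ({exc.__class__.__name__})"
    print(f"{name}: {status}")

# A failure here may not be a code defect at all; it can be a dependency
# problem only visible in production, which is the point of shifting right.
```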

Ashley: Interesting. You know, you certainly hear a lot about shift left in test environments, not so much about things happening in production environments. You mentioned applications kind of assembling, if you will, being brought together inside the browser at run time in a production environment.

Are there other use cases like this for testing and production? I’m really curious about this.

Cser: Oh, absolutely. There are many, and a few that I can think of that would be really interesting involve a lot of the personalization that is also happening these days. So, we see customers struggle with kind of these new-style marketing capabilities that are operationalizing these experiences that you would display to the user. It’s very difficult and challenging to test, difficult to see from the production environment, that you have kind of the right data and the right experience, if you will, showing up.

And so, that also provides opportunities for us to create specific products and features within Functionize to attack that problem.

Ashley: Mm-hmm. It seems also like—this is, you know, sort of an edge case maybe today, we’ll see more and more of it, but serverless applications that are more event-driven, you know, not traditionally transaction-driven, might also be a good production test case for you.

Cser: Oh, I agree. So, certainly, that would be an early, let’s call it corner case, edge case, but I think that we’re gonna see more and more of these coming as companies mature and these technologies become more mainstream.

Ashley: Mm-hmm. Well, kinda going in a little bit of a different direction here, a little birdie told me that you all are up for some type of AI machine learning award—what’s happening with that?

Cser: Well, we were just recently nominated for the AIconics Award here in San Francisco. So, we’re excited to be part of that. We’ll be attending and seeing how it goes, obviously. We’re honored to be there and to be nominated. It’s very exciting to see the company being recognized for the work that we’re doing.

Ashley: That’s awesome. Well, I wish you the best with that, and good luck. Also, I know there are some things happening on the partnership front for Functionize—what’s happening there?

Cser: Yes, absolutely. We have a lot of great traction, and so we are about to announce shortly, probably at the end of the quarter or early Q4, a very strategic partner, one of the largest global service innovators, who is partnering with Functionize, and we’re going to market together. Again, I’m really proud of the team, and it shows the work that we’re doing and the work that marketing is doing to raise awareness of the power of AI and machine learning and how that’s gonna change testing.

Ashley: Mm-hmm. So, we’ll look forward to maybe some announcements coming up in the fourth quarter, then. Gotta keep an eye out for that. Great.

Cser: Absolutely.

Ashley: Yeah. So, I’ll kinda put you on the spot here, a little bit. If you had to say what some of the best practices that you’ve learned, not only at Functionize but also before when it comes to automated testing, you know, DevOps, cloud world—if you could boil that down to two or three best practices, where would you start? What would you tell people?

Cser: I would say that finding the right tool and building the right toolchain is definitely incredibly important. The second thing, I would say, like everywhere else, is that you do need the right people, and so training and understanding of how to apply these tools is absolutely critical.

And then lastly, I would say that your strategy in the application, obviously, going back to the earlier conversation of what to test and how to test, is gonna be critical for your success. And so, the tool is obviously the primary area, and certainly education is the second area that we’re very focused on, too.

Ashley: I’m curious [Cross talk]—I’m curious about that, too. What are the kinds of things that you look at or look for in people in terms of testing skill, or kinda the skills that you invest in for current staff for training and skill development? What are some of those things that you look for?

Cser: Can you clarify the question? In terms of what we look for in the skill sets we’re hiring, or skill sets—[Cross talk]?

Ashley: So, your second item that you mentioned was the kind of capabilities and skills of your staff in testing and hiring those kinds of folks, also training for testing folks. What are some of those things that you’ve learned to look for that would be good advice for potential customers or current customers of Functionize?

Cser: Yes, that’s a great question. So, I would say that, you know, obviously, like in any hiring, you want to hire bright people who are very motivated, so I’m assuming that’s the baseline.

As far as the skill set goes, I think we definitely are looking at, obviously, professionals that have a lot of experience. But also, we can really enable and help people that are primarily manual or, let’s say, much less technical today, and bring them into the world of automation and really drive a lot of value for our customer base, because we see a lot of customers struggling with scaling and finding the right technical talent. So, I would say that’s absolutely a critical point on being able to spread this to a wider audience, but at the same time, having technical users as part of the project, I think, is still very important.

Ashley: Okay. Very good. Well, as always happens on these podcasts, we’ve run out of time. Tamas, thank you so much for being on the podcast today.

Cser: Thanks for having me. It’s been a pleasure to be here.

Ashley: My pleasure as well. Thank you to Tamas Cser, Founder and CEO of Functionize for joining us today and also, of course, to you, our listeners for joining us. You’ve listened to another DevOps Chat. This is Mitch Ashley with DevOps.com. Have a great day. Be careful out there.