Transcript
Hi, Mike Matchett with Small World Big Data, and I'm here today talking about a side of IT that we sometimes don't explore that heavily, but we really should. It's about software development, and particularly getting software from its rough, buggy, bug-ridden stage into clean, quality production code. And we do that by testing and making sure that what we're putting out there is clean and good. However, application software has gotten bigger and bigger over the years. It's gotten more complex. It's gotten hybrid. There are so many components to it that doing a quality job of testing and making sure that our code is good is really challenging for test engineers, particularly in complex CI/CD pipelines and such. We have got LambdaTest here today, though, to tell us about how some of the best and brightest enterprises are tackling this and helping software development testing get good, I guess you would say. So just hold on a second and we will be right back.

Hey, welcome, Moody. Welcome to our show, bringing LambdaTest along today. We've got some interesting things to talk about concerning AI; I think maybe that's why a lot of people are probably watching this today. But before we get to that, we've got some other territory to cover first. Just introduce yourself and tell me a little bit about how you got involved and got interested and excited by testing, because it seems like for a lot of people testing comes last, it's this last thing. But what excites you about testing?

So first of all, Mike, thanks for having me over here. Really honored to be part of this show. I'm one of the founding team members and currently head the marketing and growth part at the company called LambdaTest. Testing is a pretty important part of the development cycle, and it is something that the co-founders of my team had been doing before LambdaTest as well. They were seasoned testers.
Then they started their own testing agency. After they sold it off, they started to build a product that could help the broader testing audience. And that is what we have been doing for nearly seven and a half years now: building up and scaling up a test platform. We started off in the test execution and orchestration space, creating a platform that helped people execute and run those tests very efficiently across different operating systems, browsers, mobile devices, different types of environments, basically. And over the past few years, we have started to innovate in that area, adding more on top of just the execution engine: capabilities that help you do better analysis, better test planning, and in fact more efficient test execution, giving you a single integrated ecosystem where you get the results of all that testing in a single place.

So basically, as you highlighted, testing has been the most overlooked part of the overall development cycle. And unfortunately, because of that, it is also the biggest bottleneck in fast release pipelines. That is the challenge we wanted to solve, and not just for testers, not just for the engineering team, but for the overall development team itself. In fact, since you mentioned you were a product manager earlier, this would be a most important aspect for you: how to release fast without a bottleneck at the quality assurance process. So that's what we have built, that's what we have been building, and recently we have added AI features to make the lives of testers, of overall testing teams, in fact of overall development teams, easier.

All right. Just a little bit before we jump into the AI part: when we talked about looking at testing, I'm glad you brought up some of those points. Just to recap, right?
So we also think about testing having some challenges, and QA engineering having a whole litany of them. I think you mentioned scale. You mentioned security. You mentioned the complexity of tests. What do people in QA really run into as some of their bigger problems today, problems that no one was actually helping them address? You mentioned a couple of those, but can you go into a little more detail?

So when we started to talk with enterprises, and even startups, and we have more than 2 million users on our platform whom we have been interacting with heavily over the past seven years, we started to cut down to what exactly the problem statements are in the quality assurance cycle right now. And it's usually not the testers themselves, but the things around that part. So for example, we did a survey where we talked to nearly 1,600 enterprises, and we got to know that more than 88% of them are using CI/CD tools and pipelines in some way or another in their process. But only 48% of them were triggering their automation tests through the CI/CD tooling, even though it is very much possible to do that. This was a big gap, and we started to analyze why that is happening. We got to know that teams are not triggering their tests through CI/CD because testers were not empowered or provided the right skill sets or tool sets to integrate their tests with the CI/CD pipeline. So nobody bothered about adding or triggering the tests there. Now, this was just one of the challenges. Take reporting, for example: once the tests are executed, all the artifacts have to be collected together, all the data has to be collected together, and then stitched together to create the right observability.
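To make that "triggering automation tests through the CI/CD tooling" gap concrete, here is a minimal, hypothetical sketch (not LambdaTest's tooling; the test names and summary shape are invented for illustration) of what it amounts to: a CI step runs the suite programmatically and turns the outcome into a pass/fail signal the pipeline can gate a release on.

```python
# Sketch: a CI job invokes this script; the boolean it produces is what
# the pipeline gates on. The test cases stand in for real end-to-end
# tests (e.g. Selenium-driven ones) and are purely illustrative.
import io
import unittest


class CheckoutFlowTests(unittest.TestCase):
    """Stand-in for a real automation suite."""

    def test_cart_total_in_cents(self):
        # Two items at 1999 cents each.
        self.assertEqual(2 * 1999, 3998)

    def test_empty_cart(self):
        self.assertEqual(len([]), 0)


def run_suite() -> dict:
    """Run the suite and return a summary a CI step could publish."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(CheckoutFlowTests)
    # Discard the usual console output; we only want the structured result.
    result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
    return {
        "ran": result.testsRun,
        "failures": len(result.failures),
        "errors": len(result.errors),
        "passed": result.wasSuccessful(),
    }


summary = run_suite()
print(summary["passed"])
# In a real pipeline the job would exit nonzero on failure, e.g.:
# sys.exit(0 if summary["passed"] else 1)
```

The point of the sketch is only that wiring a suite into CI/CD is a small amount of glue: run, summarize, gate.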
There are tools that can help you do that, but the challenge comes down to who helps you do that as well. So quality engineers were not just required to create tests; they also needed an end-to-end solution around them, one that integrates with the CI/CD pipeline, that integrates with the observability tooling, and a lot of things on top. So they were required to not just do testing, but to be well versed in a lot of other skill sets as well, which, to be fair, is a little bit challenging. So we try to solve that part.

Yeah. So, just as a thumbnail sketch if I sit back: there are lots of things people think of when they build a CI/CD pipeline, all these different components that developers and different parts of the team will bring along. And LambdaTest and the solutions you have are really the tester pieces of that CI/CD pipeline. People should think of it that way. When they're designing CI/CD pipelines, they should have componentry specifically for the QA engineers, well, for the QA engineering function. And we'll talk about that; when we get to AI, that broadens this out quite a bit. But it really says, you know, I shouldn't just have testing patched on at the end. It's not a gate to pass through that somebody else does. It's a big component of the pipeline itself, and we have a solution for that. So I'll leave it at that.

Let's switch gears. I'm sorry to do this to you; it's just in the interest of time. Let's talk about the latest thing you've been doing with all this experience you have, running over a billion tests for people in the last couple of years.
Putting that into everyone's favorite topic today, you know, GPTs, and bringing that knowledge back around to help someone in the test world. How does that work? What exactly are we helping someone do by bringing generative AI back to testing?

So we have been building and adding AI capabilities to our platform for some time. We started off with cognitive AI aspects that help you identify issues from your past test execution data. So AI is not new to us. But as I highlighted, we wanted to create a platform or solution that can help not just the testers themselves, but the overall software development team. And one of the biggest use cases we saw for the GPT-based platforms was to democratize the overall quality assurance process. So we created a platform that helps you create tests using natural language: a software testing agent that can be commanded through simple English commands to write end-to-end tests for your web and mobile applications and run them at scale on the LambdaTest platform. And that not only empowers testers and developers with the right tooling, but also helps involve other members of the team. For example, a project manager or a product manager can now write quality assurance tests or scripts using natural language and check whether the output of the build, the output of the software, is meeting the stated standards or not. So it's kind of closing the loop, in fact shifting the overall testing process much further left. Even in the design phase, people can now write their tests, plan their tests, and create final test cases for the application.

Right. And when this capability comes out, it really enables more people to participate in testing. It kind of opens up testing to all the stakeholders in a way. Exactly. So.
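As a toy illustration of what "commanding a testing agent in simple English" involves under the hood: at minimum, constrained English commands get mapped to structured test steps. This is an invented sketch, not LambdaTest's agent; the command grammar, action names, and step schema are all assumptions for the example.

```python
# Toy natural-language-to-test-step mapper. A real NLP-driven agent is far
# more flexible; this only shows the shape of the idea: English in,
# structured, executable steps out.
import re
from typing import Optional

# Each pattern captures the parameters of one kind of test action.
PATTERNS = [
    (re.compile(r'^open (?P<url>\S+)$', re.I), "navigate"),
    (re.compile(r'^click "(?P<target>[^"]+)"$', re.I), "click"),
    (re.compile(r'^type "(?P<text>[^"]+)" into "(?P<target>[^"]+)"$', re.I), "type"),
    (re.compile(r'^expect "(?P<text>[^"]+)" on the page$', re.I), "assert_text"),
]


def parse_command(command: str) -> Optional[dict]:
    """Map one English command to a structured test step, or None."""
    for pattern, action in PATTERNS:
        match = pattern.match(command.strip())
        if match:
            return {"action": action, **match.groupdict()}
    return None


def parse_script(lines) -> list:
    """Parse a whole script, silently skipping unrecognized lines."""
    return [step for line in lines if (step := parse_command(line))]


steps = parse_script([
    'open https://example.com/login',
    'type "alice" into "username"',
    'click "Sign in"',
    'expect "Welcome" on the page',
])
```

A product manager writes the English on the left; the structured steps on the right are what a runner (or an exported script) actually executes.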
And another feature I want to highlight: it's not just a low-code/no-code solution. We are ultimately creating code on the back end. So whatever tests are generated from the natural language commands can be exported in the programming language or framework of your choice. And we have provided two-way sync, so that code, in fact your current automation code, can be synced with the platform. Your Selenium code, let's say, can be modified and kept in sync with the AI, and now you can evolve that Selenium code using our platform. So it's not just for early or entry-level users; it's also for advanced testers and developers themselves. They can evolve their testing, evolve their current code base. Or if, let's say, half of the team prefers to write code, they can do their part in code and import it back in, and the rest of the team can evolve those tests using LambdaTest, or the other way around: write new test cases and add them to the build. So this two-way sync helps bridge the gap between testers, developers, DevOps, and even the product teams. It helps them be a more collaborative force in doing the quality assurance process.

Yeah, I'm glad you said collaborative, because I was searching for the word here. When you become the language and the platform that helps people of different perspectives, different orientations, different skill sets actually work on the same project, that's really powerful. Exactly. Somebody can look at the browser on the front end and say, well, I want to type this in because I wanted to do this. But then somebody doing the back-end testing can say, well, really, the error codes I'm getting should respond this way. And you have that actually marry in the middle, in the pipeline, effectively.
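The export half of a two-way sync can be sketched as simple code generation. This is a hypothetical illustration (the step schema, templates, and locator strategies are invented, and real generated code would be far more robust), showing how structured steps might render to Selenium-flavored Python source that a developer could then evolve by hand:

```python
# Sketch: render structured test steps to a Selenium-style Python script.
# The emitted script is produced as text here, not executed; generating it
# does not require a browser.
TEMPLATES = {
    "navigate": 'driver.get("{url}")',
    "click": 'driver.find_element(By.LINK_TEXT, "{target}").click()',
    "type": 'driver.find_element(By.NAME, "{target}").send_keys("{text}")',
}


def export_steps(steps) -> str:
    """Render structured steps as Selenium-flavored Python source."""
    lines = [
        "from selenium import webdriver",
        "from selenium.webdriver.common.by import By",
        "",
        "driver = webdriver.Chrome()",
    ]
    for step in steps:
        # str.format ignores extra keys such as "action".
        lines.append(TEMPLATES[step["action"]].format(**step))
    lines.append("driver.quit()")
    return "\n".join(lines)


script = export_steps([
    {"action": "navigate", "url": "https://example.com/login"},
    {"action": "type", "target": "username", "text": "alice"},
    {"action": "click", "target": "Sign in"},
])
```

The reverse direction (parsing hand-edited code back into steps) is what makes the sync two-way; that is substantially harder and is omitted here.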
So, I mean, this is all great, and I'm sure the proof is in the pudding for people who try this. But let me ask you to throw out a story about how much this accelerates a process. How much does this help if somebody goes down this route and implements it?

So, numbers-wise, one of the biggest advantages we have is just in the execution part of things. On this platform, we are able to help execute your current tests up to 70% faster. But test authoring becomes better as well. I cannot directly give the numbers right now because our platform is still in beta; results are still very subjective across companies, and we are taking a lot of feedback from them. For example, we helped accelerate the test authoring process for one of the biggest names by nearly 2x, but that is based on their perspective and the project they are creating. We are looping more and more people into this process and opening up beta access to a lot more companies, and because of that we will get more data. But overall, I'm very positive about the outcomes. I feel this is one of the things that can revolutionize the way quality assurance is done. And not just at ground level, not just at execution, but at test authoring, test analysis, and test planning itself. There are a lot of steps involved in the overall development cycle, and we will be able to help cut down time across each of them.

All right. So if folks have complex application development going on, development at scale, enterprise applications, lots of team members, this sounds like a key piece to really energize, optimize, and accelerate everybody, not just the QA function but the whole pipeline. Right. This is a way to make the pipeline flow faster.
So if they're trying to do agile development and going, like, we're constantly running out of time to do things, it's like, you know, here's something you should probably be looking at. Testing is the long pole in the tent, right? That's the one you want to shorten every time.

So this is great. There's so much more that we talked about offline that we're just not going to get to here. I encourage all of you who are even remotely involved in CI/CD pipelines, anywhere in the pipeline, whether you're a developer or a product manager or a stakeholder or the QA function in some way, to take a look at this, because this is something that could really get your team working better together. But I have a question for you: if someone wants more information, they want to maybe look at this, they want to see what LambdaTest is all about, what would you point them at? What resources would you say to go take a look at?

So first of all, of course, our website itself, lambdatest.com. It's free to sign up and free to use the platform. The feature we just talked about, test creation using natural language, is in private beta, so there is a waitlist for that, unfortunately. But you can sign up for the waitlist and we'll be happy to open up access to you. Another cool thing about our website is our chat support. We have very fast integrated chat support on the website, so you can go over there, contact us, and we'll be happy to help out. Otherwise, feel free to reach out to me on LinkedIn. I'm very active over there, and I'll be happy to answer any questions you have.

All right. I see the lights are going off there. Maybe it's time to call this one a close. Thank you for being here. Come back around when you get this into GA and have some experiential results, and tell me how it's developing.
I think the audience would really like to know: are there good uses for AI, for a ChatGPT copilot in this case, that will really help accelerate what we're doing in IT? And where are those particular lever points we should be looking for? I think this is one of them. So thank you for bringing this to our attention today.

Glad to be part of this show, Mike, and I definitely will share all the insights we get. It's still, as I said, something that is under development. And yeah, we see a lot of positive results, and I'll be happy to share that positivity around. Yup, yup.