Transcript
Martin Boyd: Okay, everybody. Good morning, good afternoon. Welcome to our webcast today. We have a pretty big audience today, which is just tremendous to see, and we have a lot of great material and great speakers here. So again, welcome to our webcast. We are going to be talking about Profisee, adaptive master data management, and how it is better together with Microsoft Fabric. So let me share my screen and we can get started. All right. Let me do a quick introduction of our speakers and our topic here today, and then we'll dive into it. A few weeks back, Microsoft, as probably everybody who's joined this webcast already knows, introduced Microsoft Fabric as a public preview product. And if you were watching the live stream of it being announced by Satya and other folks, there were some strong emotions in the chat. People were very excited about it, because I think most people recognize, and a lot of the comments in the chat said, that this is an enormous step forward in how enterprise data is going to be managed. We at Profisee were part of the private preview of Microsoft Fabric, so we had a little bit of a heads-up, and we had our own opinions on how much of a big step forward it was going to be. So we thought it would be helpful to have a webcast here where we talk about not only Fabric, but how Profisee master data management works with it and the synergies between them, why they are in fact better together. As for our speakers today, I'm going to be your host: Martin Boyd, VP of Product Marketing here at Profisee. Our two main speakers are going to be Eric Melcher, who is our chief technology officer. He's been with Profisee over 12 years, and he will be doing a live demo of the software that his team produced, so that's always a good vote of confidence. I'm sure both he and his whole team have their fingers crossed right about now. Also featured is speaker Holly Kelly, who is a principal program manager at Microsoft on the Fabric customer advisory team. Holly has been with Microsoft 25 years in various technical roles, architecting and implementing data and analytics solutions all the way through that time. Most recently she has been involved with the Fabric team, and has been since the days of its inception, and I rather suspect she was one of the people evangelizing and lobbying hard for it to be created in the first place. Holly spends a lot of time understanding customers' challenges and the needs of data management, with a focus on all the things we're going to be talking about today, as well as things like data mesh and other related topics. And so with that, I'm looking forward to this conversation. Let's jump into it. Not exactly an agenda, but here are the things we're going to tackle today. Oh, by the way, before I start: I think a lot of you have already discovered there's a Q&A tab as well as a chat tab. So if you have any questions along the way, please feel free to drop them in the Q&A. It's going to be easier for us to monitor them there, but we will do our best to answer them, either in written form or verbally, whatever comes up. Hopefully we'll be done in time to take questions at the end and make sure that we cover as much as possible. But it's a pretty big audience, so if we don't get to everything live, we'll try and get to it later on. All right.
So here are the things that we're going to cover today. First of all, what is Microsoft Fabric? Some of you will have seen the announcement, some not, so we need to level set on what Fabric is. And I think the really important thing is: what is it going to do for you? How is it going to change the enterprise data landscape? I'm pretty convinced, and I think all of us on this call are pretty convinced, that it is a fairly pivotal moment in how enterprise data gets managed, and that the way it gets managed going forward will be different because Fabric exists and is leading the way in how the big cloud hyperscale companies are able to help their customers in that regard. Then we'll talk a little bit about how Fabric and MDM are better together. What are the synergies between them? How does MDM assist Fabric, and how does Fabric assist MDM? We think there's a strong synergy, which is why we've been investing in it and working with Microsoft on this for some time. Then we'll talk a little bit about adaptive MDM, which is our flavor, Profisee's flavor, of MDM. We think it's the best way of being synergistic and working with Fabric. It changes the game a little bit in how MDM is pursued and is a little bit different from traditional approaches. And as a fun cherry on top, we will show you where artificial intelligence fits in. I think we, and everybody else on the planet, are figuring out how AI makes our products better, and Eric will show you that with live software in just a minute. So with that introduction to our topic, let me introduce Profisee for those of you who don't know who we are. I think it's important to let you know who you're listening to here. We are a master data management company. We're very focused on MDM; in fact, it's our only focus. We are a fast-growing company. We just came off a record quarter, which was part of a record year for us, so we're growing pretty quickly. The reason to mention that is not any form of boastfulness or chest beating; it's because MDM is a topic that is only becoming more and more important to people out there in the world. The more we try to leverage data, the more we need master data management to weave that data together, so to speak, and we're going to be talking about all of that. We're well known for having a very Microsoft-centric architecture; we have a lot of synergies with Microsoft, and I'll be talking about that more in just a second. We have cloud-native software that can be deployed platform-as-a-service in your own tenant, or we can host it for you as fully turnkey and automated software-as-a-service. And we have high customer satisfaction. Again, the point here is not to be boastful, although we are very proud of this. The point is to remind anybody who's thinking about enterprise software that you're not just buying a piece of software; you're entering into a relationship with a company that is hopefully going to be supporting you for years to come. So it's important to know that you're dealing with people you will be able to get along with in the long run. We measure customer satisfaction through Gartner Peer Insights, where we have the highest score of all the mainstream MDM vendors, and we also measure our Net Promoter Score.
And for those of you who are familiar with NPS, it's really the number of people who would recommend you minus the number of people who would not recommend you. The average for enterprise software is around 41, and our score is way out there at the upper extreme, at 78. I don't know how many other vendors go to the time and effort, because it is a lot of effort, to measure their NPS score, but I'm confident that none of them come close to this. So anyway, it's good background for you if you end up thinking more about how to pursue MDM. What we're going to be talking about here is our approach to MDM, or rather, we as a company have an approach to MDM we call adaptive MDM. Traditional MDM tends to be a little bit fixed and a little bit one-dimensional in how it looks at how data can be implemented. Our software adapts to your organization, your data, your requirements and your people. And in enterprise software, if you're trying to do anything other than conform to what the organization is, you're giving yourself a really difficult task, because changing big organizations is a difficult thing to do. The fact that we're adaptive makes it fast and easy to implement and allows us to give much more coverage than traditional MDM does. We'll talk more about all of that. Let me talk really quickly about our relationship with Microsoft, since this is a joint Microsoft presentation; I think that's important context for you. If you see the synergies of where MDM will fit with Fabric, it's important to know that we are the best integrated with Azure, and therefore, we would say, the best for Azure. We are integrated with Fabric on day one. And by integrated here, it's important to point out, I don't just mean that we work to the same open API standards or something like that. If you go download Fabric right now and set it up, you'll find that Profisee is actually included in some of the menus. Our connectors ship with the Fabric components, so Fabric is aware of Profisee and Profisee is aware of Fabric; it's not just an open-integration kind of thing. We'll talk more about this later, but we have had an integration with Microsoft Purview, which is Microsoft's governance product, for two and a half years. We have the first, deepest and most mature MDM integration of any vendor, and two and a half years later, that's still true. Our connectors ship with Data Factory and Power BI, so those integrations are fairly clear. We can move data in and out of Synapse easily, and we are cloud native, as I mentioned before. All of that is recognized in various ways by Microsoft. As of today, we're allowed to talk about the fact that we are a finalist for the Azure Rising Technology Award, both globally and in the US, which is a pretty big deal because more than 4,000 companies submitted themselves for that. So this is a nice milestone, a testament to how well we work together and how much effort we put into it. We're a launch partner for the Microsoft Intelligent Data Platform, and we have more published reference architectures on the Microsoft documentation site than any other vendor, the most MDM implementations on Azure, joint sales with Purview, deals on the Marketplace, et cetera. And all of that is really because we've had a close relationship with our friends at Microsoft for a long time.
Folks like Mike Flasko, who owns the Purview product, ultimately recognized the value of MDM as part of an integrated data estate. All right, so enough on the introductions. I think that should be enough for you to know who you are listening to here. Let me just say one final thing before we pivot to why master data management, and how master data management is part of what you should be doing. Gartner said, as part of their most recent MDM Magic Quadrant, that MDM initiatives have continued to progress as a foundational component of digital transformation programs. That keys off our belief, and Microsoft's belief, that if you're driving digital transformation, which could take the form of analytics, pursuing AI, organizational improvements, et cetera, and you're using data to do it, and who these days isn't trying to use data to do it, you need to have the right data. So MDM is all about getting the right data, and Microsoft Fabric is an enormous step forward in making sure that you have access to all the right data exactly when you need it. So with that, I'm going to hand over to Holly, who's going to talk a little bit about Fabric: how it fits, its genesis, and what makes it up. Over to you, Holly. Holly Kelly: Awesome. Thanks, Martin. You can go ahead to the next slide. Before I actually talk about what Fabric is, I want to set a little bit of context in terms of why we went down this path. As more and more data is being produced, whether from applications or from devices, et cetera, the amount of data being generated ubiquitously is enormous. And so now organizations are faced with this existential challenge of how do I start taking advantage of this data to create competitive advantage for my organization. But many organizations are also faced with the reality of where they are today. In many cases there are a lot of legacy solutions that are built on older technologies, and that provides limited scalability, a lack of flexibility, et cetera. The other thing as well is that as organizations look at their budgets, budgets are not getting bigger; in fact, in our economic circumstances, they're quite often shrinking. So organizations are not only faced with this huge, vast amount of data being generated, but they've only got a limited amount of resources that can actually start working with that data. And with that limited amount of resources, how, again, can we start taking advantage of that data? The other big thing is that when we talk to a lot of organizations, it's not that I've got one data lake, it's that I've got many, many different data lakes. That generates different silos of data, which causes data fragmentation and data duplication, which can then cause upstream challenges in terms of the correctness of the insights being generated. If you're not able to trust that data, your AI and your insights are only as good as the data that's sourcing them. And then we're also seeing this huge need for self-service. So how do we enable business users, or less technical users, to get the information they need, while also having that shared in a deeply governed way so that we're not creating chaos throughout the organization?
So if you want to go to the next slide. This is further complicated when you start looking at the AI and data landscape. This is a really great visual from a company called FirstMark, and this is the reality that organizations are faced with. Data is being produced, data is being generated, we have a limited amount of resources, and we're faced with this vast array of technologies that are available to work within this data space. So when organizations are making these decisions, there's also this integration tax, if you will, where when you're bringing in multiple pieces of the stack and multiple different components, it takes a lot of work to deeply integrate those things. And this leads to very complex architectures where you've got a lot of boxes and arrows and everything else, which then leads to complexity, latency, bottlenecks, et cetera. If you go to the next slide. As we started talking to our organizations, one theme came out very, very clear: we need to simplify this. And this is just a really great quote from one of our chief data officers, but I think it's relevant to every chief data officer: I want to be the chief data officer. I want to focus on data and insight creation. I don't want to have to take time, resources, budget, et cetera, to provision and manage all of the complexities of just the technology within the data estate. So it's that refocusing effort, where we're really focused more on the data and less so on the architecture. Next slide. This is a really great quote from Gartner as well, but one of the main patterns we're seeing in a lot of organizations is moving away from these very monolithic, central IT types of approaches, where you've got a data team that is responsible for ingesting, modeling, preparing and serving the data for various upstream consumption purposes. In many cases there is an additional step to take the data out of that data estate and then prepare that data in a different way for that upstream consumption scenario. So we're seeing a lot of organizations moving away from these monolithic architectures into more of these, I'll call it data mesh, but federated architectures, where the role of the central data team changes. It's not so much about owning everything end to end, from ingestion to serving, but about refocusing these data teams on the curation of the data domains that are relevant across the organization, and then enabling the business domains to leverage that core data and create their own insights on top of it. And the reason why is because the domain knowledge is really important to capture, and sometimes that's really difficult to capture in a centralized system, because you've got a lot of different ways of viewing that data or serving that data. So by allowing the business domains to leverage the master data that your central data teams are managing, and then create their own domain context on top of that, you get this really nice synergistic relationship between data ownership and data product creation for upstream consumption scenarios.
And then key to this, and this leads nicely into Profisee, is the whole governance aspect of that, right, and being able to ensure that we've got the right level of data quality for those data domains, because that's really going to drive all of the insights upstream. Next slide. So when we look at this, and this will lead into what Fabric is, we're seeing a lot of organizations making heavy investments in the modernization of the data estate. We're seeing organizations wanting to move away from the siloed solutions and siloed data that exist within the organization and pull that into a common data foundation that is SaaS based, where we're managing all of the underlying infrastructure, but you're able to have deep data curation and deep data governance around that data and have it located in a single place. Also, as we look at a lot of these federated architectures, without governance there's a lot of risk in terms of data breaches, and data breaches are never a good thing. So with Fabric, and this whole modernization effort, the question is how can we start leveraging all of these built-in governance and compliance and security components that are automatically there within Fabric. The other thing as well is that we're seeing an organizational shift toward more of these business-domain data stewards. The challenge there is that we may have a lot of data engineering talent within the central IT teams, but that talent doesn't necessarily exist out in the business domains. So how can we make that easier? Instead of having to have a ton of technical skills to be able to work with data, how can we enable more of a self-service way of working where, again, you've got all of these data-level components behind the scenes, but you're really enabling business domains and business users to have that agility in insight creation without having to go through that central IT team. And then the biggest thing as well is the integration tax we talked about. That is just a reality today. So it's about moving away from all of these varying vendors for different purposes and different classes of products and really into a full analytics suite with cost transparency, so that you're able to manage that cost across the organization. If you want to move to the next slide. Martin Boyd: Okay. Holly Kelly: So if anyone listened to the Build announcements a couple of weeks ago, actually it was about a month ago now, we talked about Fabric. The slides that I put up before really set the stage for why we went down this path with Fabric. So what Fabric is: it's a truly end-to-end analytics data estate where I can have different personas collaborating on these solutions, even coming from different backgrounds, different skill sets, etcetera. Fabric is a SaaS-based foundational tenant where, basically, when you go in and you want to provision a new project, you don't have to go in and set up Azure subscriptions and resource groups and all these different compute pools, et cetera. I can simply go in and create a new workspace.
All of these services are already pre-provisioned and configured for you, so you are able to truly focus on insight creation within the first few seconds, versus having the delay or bottleneck of some of the more technical infrastructure work. Within Fabric we're basically taking a lot of the best-of-breed engines that we have in Synapse today, but really modernizing them for this whole new world of data. And the key point here is that we're moving to a completely lake-centric architecture. One of the challenges today is that in many cases I'm having to duplicate data, even into fit-for-use containers for performance. So if I had data in a lake and I wanted to pull it into a warehouse, I'd have to copy that data, I'd have to manage the pipeline, I'd have to re-manage security, et cetera. What we're doing within Fabric is moving to a unified, open architecture built off the data lake. Every single workload within Fabric reads and writes data into this underpinning of OneLake. OneLake you can think of as a SaaS-delivered data lake for the entire organization. This is really how you get to that unified data foundation where you can provide deep data governance across all of the data and all of the different workloads. And again, we have all of the capabilities of an end-to-end analytics stack: data ingestion, so being able to ingest data at scale; data engineering, to do things like big data analytics, maybe working on unstructured or semi-structured data; data warehouse components, where I'm familiar with working with databases, schemas, tables and views and stored procedures; data science; real-time analytics; et cetera. All of these workloads sit on top of this SaaS foundation, and because of that, like I said, you're not having to worry about all of the upfront provisioning of the infrastructure. It's just baked into the platform. And again, at the very bottom here: a SaaS experience, deep data security and data governance built in, unification of compute and storage, and a common business model, because everything bills against the same compute capacity, so you're not having to manage multiple pools of compute. So that is a little bit of what Fabric is, and I'm going to pass it back over to Profisee and let them talk about how this all fits together. Martin Boyd: Perfect. Holly, thanks. That was a tremendous overview of Fabric. Let me paraphrase what Holly just said and say, from our perspective, Fabric is the new home for all enterprise data. A few years ago Microsoft introduced OneDrive, and it became the home for documents and allowed us to collaborate on documents and do all the things we are familiar with now and would find hard to live without. Fabric is going to do the same thing for enterprise data. You have the integrated tools that Holly just described, and importantly, you now have one place where it's all going to be either stored or referenced, so that, in the words on Holly's previous slide, it's going to start to overcome data silos. That is absolutely true. Does it mean that you don't need MDM anymore? Well, you're going to be unsurprised to hear that we don't think it replaces MDM.
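To make the lake-centric point above concrete, here is a minimal sketch, assuming a Fabric notebook attached to a Lakehouse, where a `spark` session is pre-created. The table name and sample rows are illustrative and are not taken from the webcast.

```python
# Minimal sketch, assuming a Fabric notebook attached to a Lakehouse, where the
# `spark` session is already provided by the environment.
from pyspark.sql import Row

raw = spark.createDataFrame([
    Row(customer="Weekend Tours", state=None, amount=1200.0),
    Row(customer="Weekend Tours Inc", state="GA", amount=800.0),
])

# Every Fabric workload reads and writes Delta tables in OneLake, so persisting
# the dataframe as a table makes it visible to the warehouse, Power BI, data
# science, etc. without copying it into a separate store.
raw.write.format("delta").mode("overwrite").saveAsTable("sales_raw")

# A downstream engine can query the same table directly from OneLake.
spark.sql("SELECT state, SUM(amount) AS sales FROM sales_raw GROUP BY state").show()
```

The point is the single copy: the warehouse and the Power BI report shown later in the demo can read this same Delta table rather than a duplicate.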
So here's how they fit together and why MDM is still a key requirement. Let's assume we've got a whole bunch of data over here coming from Dynamics, Salesforce, Oracle, legacy applications, cloud applications, whatever they all are. We've got a great tool set like Data Factory bringing all of that data, physically or virtually, into OneLake. Now we have all this data and we know where to find it. But if we have Martin Boyd in one system and Boyd in another system, someone still needs to adjudicate: is that one person or two? Is that one customer or two? How is it going to impact our analytics and our transaction history and all of that good stuff? If you're pulling in data from many siloed sources, it's still going to be inconsistent and incomplete and duplicated, which is exactly what you would expect, because this data over here was created without the benefit of any kind of unified data governance. So there's going to be disparity in the data; that's just how it is. And that means it's going to be hard to assemble and use the correct information. When you're using all these integrated tools to provide analytics, or using generative AI to generate insights or what have you, you're going to have a whole bunch of stuff here, but is it going to be the right stuff? Questionable, I would say. So here's how we at Profisee help with that. Master data management is there with a whole set of tools and capabilities to allow us to handle that data better: things like match and merge, data quality, and workflow, all built upon common master data models, and then a data stewardship layer on top of that. We want to automate absolutely everything we can in this middle layer here, but even if you can automate 80, 90, 95, 98 percent, which is fantastic, you're really never going to get to 100 percent. You still need data stewardship sitting on top, so that human beings can, when necessary, interact with the data for oversight or adjudication or enrichment or correction or what have you. The whole Profisee environment here is integrated with Microsoft Purview, which I talked about before; it's Microsoft's governance platform. So whenever we create master data or master data entities in Profisee, they're moved, or copied, I should say, into the data catalog. Anyone working in the governance platform on standards and policies can do that in Purview, and then all of that information is available here in Profisee. This two-way integration allows us to enforce the data quality standards that are being set by Purview or, in some cases, other platforms, but certainly Purview is the one we see most often. We've had this in place, as I said before, for two and a half years, and it's still unmatched by any other vendor in terms of our ability to work with this platform. So how does that work with Fabric now? We have all of these data assets in Fabric. We now know where to find them. They're more available than they were before; we don't have to be the chief integration officer, as Holly said. So we can move or copy some of that data into Profisee, move it through our MDM carwash, so to speak, and publish golden-record master data back into Fabric. Now we have a proper record and understanding of who our customers are, or our products or locations or assets, or whatever it is that we're trying to master as part of our use case.
So now we have high-quality, trusted data that is complete, consistent, accurate and ready to use. The insights that we generate are going to be the correct insights, because they're based on trusted, reliable data, not just whatever we happen to have lying around from our raw data feeds. So hopefully it's clear that Profisee is better off, because it's easier to grab this data when there's a place where it's all centrally located and managed, and it's easier to do the analytics and all that kind of thing. But Fabric is better off too, because it gets to take advantage of properly consolidated, trusted data. So both things are better together; there are very strong synergies there. If you've followed all that logic and you agree with it, and you think Profisee is something you should think about, then I think it's important to recognize that some of the MDM requirements are now a little bit different than they might have been a few years ago. Traditional MDM, I would say, was relatively fixed and narrow. Fixed, meaning quite often there were prebuilt data models for customer or product or whatever, so you had products called Customer 360 or Product 360, and that was really all they did. But the pre-configured domains that came with them, and the use cases they were designed for, didn't fit well, and continue to not fit well; they're not flexible. So if they don't fit, you either have to refactor your organization to fit them, which is a bad idea, or you spend a lot of time, six, nine, twelve months, whatever, trying to reconfigure them to fit you. That's just generally a bad place to start. They're also generally too technical for business users, and you need to engage the business users, the data stewards, as I mentioned before. And they only tackle one or two domains and don't cover the full breadth of the data estate, or the fabric; nothing Holly said there was about tackling only one domain. It's all about broad coverage of your whole data estate. So how do we tackle this? We have an approach we call adaptive MDM, and it breaks into adaptive stewardship, adaptive rules and adaptive coverage; I'll step through them. Adaptive stewardship is all about making sure that the stewardship experience is very well customized, constrained and curated for the specific role, task or responsibility of our data stewards. Here are some examples of tasks they might have to do: customer data review, product maintenance, manager approval, et cetera. Having these things be quickly built and deployed in a way that is very well tailored to the steward maximizes engagement and minimizes their effort. This is also where we are starting to use the AI that Eric's going to show you in just a moment. These FastApps have been available in our product for a long time, but they really come into their own, I think, in this world where we're trying to get broad coverage. We also have adaptive rules. The idea there is that, in this middle tier, the match, merge and survivorship rules should adapt to the source systems and their requirements, data quality rules should adapt to your governance requirements, and workflows should adapt to the stewardship roles that are going to be adopted by the stewards, not the other way around.
And adaptive coverage is all about making sure that we're able to cover everything. You should be able to roll out your reference data and master data domains quickly, but in the order that makes sense to you, so that you can cover your whole data estate as quickly as possible and maximize your ROI. Okay, that was a very quick breeze through, and all of it delivers better engagement, faster implementation, and better coverage and ROI. But now, more importantly, let's switch to Eric so he can show you what some of this stuff looks like in real life. Let me stop sharing and let Eric get started. Eric Melcher: All right. Thank you, Martin. Just a quick confirmation: you're seeing my screen and can hear me? Martin Boyd: Yes. And I'll switch off the video streams just to give you a bit more space. Speaker3: Perfect. Eric Melcher: All right. So what we're going to do now is walk through a day in the life in this new Microsoft Fabric world. Starting off here, my role is data analyst, and I've been asked to do a top-customer analysis. Right now I'm in a Microsoft Fabric workspace; we can see that here. This little diamond icon indicates that this workspace has been enabled with Fabric, and here we can see that we have available to us all the various workloads in the Microsoft Fabric experience. Inside of this workspace I've created a data warehouse, I've loaded in some data from my OneLake, and leveraging that in my data warehouse has allowed me to build out this Power BI report, which is a top-customer analysis. My goal here is to understand sales by geography and who our top customers are. And here we see a lot of the common data quality issues that organizations run into. For example, I've got a lot of customers that don't have the state value populated, and therefore I wind up with my top sales geography being blank; it's empty. That missing information is skewing my data. And then over here I can see a stack ranking of my customers, but if I sort this list by name, I pretty quickly see that there are some pretty obvious duplicate records in here. Now, a lot of times the solution organizations reach for is, oh, we'll do a one-time data cleanup. I'll come in here and manipulate the data, and that might work for this relatively small sample data set, but in the real world what you really need is a permanent solution to this problem, something that can improve the quality of this information on an ongoing basis. We view Fabric and Profisee and Purview, these three products together, as collectively creating a whole ecosystem that we can use to solve these problems. And it really begins with Purview. Purview is Microsoft's data governance platform, and I saw some questions in the chat about what the difference is, or what the interplay is, between Profisee and Purview. Purview is a governance platform that spans everything, not just Azure, but other clouds, on premises, etcetera. And Profisee, as an MDM platform, is integrated with Microsoft Purview. So, going back to Holly's comment about the chief integration officer, what we want to solve for with MDM is that our product natively integrates with these other complementary Azure technologies. Purview's role here is to have the broader data estate catalog and the data map, and with Profisee being integrated with it, we show up there as well.
So in this scenario, what Purview provides is the ability for business users to discover the information they need, and here I'm looking for some better customer data to help me with my analysis. What I see here is a certified customer data set; that's an asset that was registered by Profisee and can now be governed within Purview. We can have descriptions, we can see classifications; we can see here that the US state name classification is in here. If we look at the schema, we can see what information is available to us within this data set. I can look at the lineage, and I can see that this data set is going through Profisee's matching process and creating golden records, and I can see that the state value is populated across that process. If I have any questions, I can figure out who to contact. And most importantly, if I say, hey, this is in fact something that I think would be helpful, I can go ahead and navigate from Purview into the Profisee experience, where I can look at that data. So now I've just made the leap from data governance to a data management solution. Here we can see the bidirectional nature of this integration: I can see all that same governance information available to me natively within the Profisee experience, and if I wanted to, I could actually click the link here and navigate from Profisee back to Purview. So now we've been able to use Purview to find the information we need. What we're going to do here for a moment is switch personas into more of a data stewardship persona that's using the Profisee platform to improve the quality of the information. We're back here on the home page of this Profisee application. From here I could just go and search for and find any entity or hierarchy that I'm looking to manage directly, or, in this case, I actually have some FastApps, which are pre-configured applications that provide much more of a curated experience for a data steward. We'll start by jumping into the customer FastApp, and this is a place where I can go to actually manage and improve the quality of customer data in my enterprise. From here I can quickly identify certain records that I'm interested in; I want to see some proposed matches from JD Edwards, and I can look at those. In this case, I want to look at one of those customer records that I saw had a lot of duplicates within my original Power BI report. Here I can search for Lana. I see a number of different records that have Alana in them, and if I open up this golden record, I can actually see all of the various and sundry source records that are represented across my various enterprise applications and that really all mean the same underlying customer in the real world. If I want to understand how this match group was formed, which records matched to whom and with what score, so really why this group exists, I can click on the visualizer here, and this allows me to open up and view this match group and understand, in this case, that this record from SAP seems to be the record that a lot of the other records matched to. If I want to dig in further, with a click I can drill into the details of these matches and identify why the group formed. In this case, I actually have a record in here that was manually moved in that I should probably move out. But once I'm happy with this group and the records that are in it, I can then move on to survivorship.
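For readers who want a feel for what the matching, survivorship and data quality steps in this walkthrough mean, here is a deliberately simplified sketch. Profisee's actual engines are far more sophisticated; the records, fields, threshold and source-precedence order below are assumptions for illustration only, not the product's configuration.

```python
# Simplified, illustrative sketch of match groups, survivorship and a data
# quality rule. All values, thresholds and precedence rules are assumptions.
from difflib import SequenceMatcher
from itertools import combinations

records = [
    {"id": "CRM-1", "source": "CRM", "name": "Alana Weekend Tours", "city": "Atlanta", "phone": "+1 404 555 0101"},
    {"id": "SAP-7", "source": "SAP", "name": "Weekend Tours, Alana", "city": "Atlanta", "phone": None},
    {"id": "JDE-3", "source": "JDE", "name": "Lana Weekend Tours", "city": "Atlanta", "phone": None},
]

def score(a, b):
    """Crude similarity score on name plus an exact city comparison, in [0, 1]."""
    name_sim = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    return (name_sim + (1.0 if a["city"] == b["city"] else 0.0)) / 2

# Matching: pairs scoring above a threshold are linked, and linked records form
# a match group, i.e. they are taken to mean the same real-world customer.
THRESHOLD = 0.7
links = []
for a, b in combinations(records, 2):
    s = score(a, b)
    if s >= THRESHOLD:
        links.append((a["id"], b["id"], round(s, 2)))

# Survivorship: build a golden record by taking each attribute from the most
# trusted source that actually has a value (the trust order is an assumption).
SOURCE_PRECEDENCE = ["SAP", "CRM", "JDE"]

def survive(group, fields=("name", "city", "phone")):
    golden = {}
    for field in fields:
        for source in SOURCE_PRECEDENCE:
            value = next((r.get(field) for r in group if r["source"] == source and r.get(field)), None)
            if value:
                golden[field] = value
                break
    return golden

golden = survive(records)

# A simple data quality rule of the kind shown next ("phone number is required").
issues = [f"{f} is required" for f in ("name", "phone") if not golden.get(f)]
print(links, golden, issues)
```

The score threshold and the precedence order are exactly the kinds of "adaptive rules" Martin described: they should be tuned to your sources rather than the other way around.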
And with survivorship, what I can do is create these golden records and populate them with the best available information from across the source records within this group. So that's an example of how Profisee can solve the original problem we saw in our data set, which was identifying and grouping together like records within my enterprise. We also have data quality capabilities, another question I saw asked in the chat. In this case, I can click on this record that has a data quality issue. I see that the problem here is that a phone number is required; I can open up this record, I see indicated to me where that problem is, and I can go ahead and enter an updated value. Profisee's data quality engine is always running these rules in real time in the background, so as a part of saving that record I've addressed that data quality issue, and now I've got one less problem in my enterprise data. Now, as Martin mentioned, Profisee is a multi-domain solution. I started focused on customer data; product is another very common domain that customers are looking to manage, so we can look at some other capabilities in the product through the prism of product data. Again, I can come in here and use this to filter and find particular records that I'm interested in. One thing that is important in the MDM space is the ability to manage relationships in your data. In this scenario, I've opened up this individual product record, and what I'm actually looking at is an interactive graph of the relationships for this particular product that we started with. From here I can navigate around the relationships in my solution. In this case, maybe I want to click on mountain bikes and navigate up to the bikes category, then maybe I want to go down and look at road bikes. I can click on road bikes, and now I can see all the individual road bike products within my enterprise, and I can always navigate back to my starting place. So I can navigate all the various relationships in my data, and I can also use this as an actual data management experience. For example, if I wanted to update some properties of this bicycle product, I can open that guy up, and in this case I can see that the manufacturer of this bike is Felt, which I know is a US-based manufacturer of bicycles, and therefore this Germany association is actually incorrect. What I can do here is disassociate this record from Germany and associate it back with the United States. So now I've used this experience not just to browse and navigate the data, but also ultimately to steward the data within my solution. These relationships can also be used to manage predefined hierarchies within your solution. In this case, a set of relationships has been configured to represent a hierarchy within my data, so now I can start drilling around; I can open up various areas in this hierarchy, I can edit individual records, and as I drill around I see road frames. This allows you to manage hierarchical relationships in your data. And in some scenarios you might find that your hierarchy is incorrect: back in my Power BI report, I saw this individual product rolling up into the wrong place in the hierarchy.
Now I have the opportunity to move that and say this really is a mountain bike product, and I can move it into the mountain bike subcategory and manage my hierarchy and all my relationships here as well. So that's a very quick overview of some of the capabilities of the Profisee platform from a data management perspective. Now we're going to switch back to our original persona of data analyst, and we're going to go into our Microsoft Fabric environment and see how we can solve our original problem. As far as Profisee's integration with Fabric goes, we've been working with the data integration team within the broader Fabric team to enable the ability to natively bring data from Profisee into OneLake and ultimately make that data available for use by all the various Fabric workloads. Within Fabric, we have the ability to create a number of different artifacts, and one of the new things that was released with Microsoft Fabric was Dataflow Gen2. With Dataflow Gen2, we can bring data from Profisee into our OneLake environment using a native experience directly within Fabric. Here I can go in and look for Profisee, and what we see is that Profisee actually shows up as a first-party connector experience that allows me to go in and basically say, this is the Profisee environment I want to bring data from into my OneLake, and I click the next button. What we're now doing is going out and connecting to that Profisee environment, and now we can browse and find the information, in this case some customer data, that I want to bring into my OneLake. From here I can click Create and go through the rest of the experience to bring this improved master data into my Microsoft Fabric environment. So this is a native experience, again, that we've delivered. It's made available as a part of having a certified Power BI connector: we've developed a connector, it's gone through a certification process by Microsoft, and that gives you a first-class experience inside of Fabric that allows people to integrate data without having to swivel-chair into other products outside of the Fabric experience. So let me close this out. Inside of our Fabric workspace, we have this dataflow that we created previously, and we've used it to bring this data in and improve our Power BI report within Fabric. Now we can go to the new analysis we're able to deliver and see the actual benefits of this improved information. Here, for example, we can see that we have dramatically fewer customers where the state is missing. We still have some customers where there just was no address information available, so Profisee wasn't able to use address information to populate the state value, but we now see a pretty dramatic difference in that the vast majority of our customers have been assigned a state, and we have a much more accurate view of our states. We've also brought together all of our duplicate source records and merged them together into golden records, and now we can see that our top customer is this customer Weekend Tours, and we can actually see underneath that all the various source records that made it up.
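For readers who prefer code to the Dataflow Gen2 experience described above, here is an illustrative sketch of landing published golden records into OneLake from a Fabric notebook. The Profisee endpoint path, token and table name are hypothetical placeholders, and `spark` is assumed to be the session a Fabric notebook provides; in practice the certified connector does this inside Fabric without custom code.

```python
# Illustrative sketch only. The endpoint URL, token and table name below are
# hypothetical placeholders, not documented Profisee or Fabric values.
import requests

PROFISEE_URL = "https://your-profisee-instance.example.com/rest/v1/records/Customer"  # placeholder
TOKEN = "..."  # placeholder credential

resp = requests.get(PROFISEE_URL, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=60)
resp.raise_for_status()
golden_records = resp.json()  # assumed: a list of golden-record dictionaries

# In a Fabric notebook (where `spark` is predefined), persist the mastered data
# as a Delta table in the Lakehouse so the warehouse and the Power BI report
# can query it directly from OneLake.
df = spark.createDataFrame(golden_records)
df.write.format("delta").mode("overwrite").saveAsTable("customer_golden")
```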
So this actually resulted in a 33 percent reduction in the number of distinct customers we have as an organization, which pretty dramatically increased the average sales value across those customers, and now a lot more of our customers are meeting the original sales target that our analysis was based off of. So this is a dramatically improved analysis that I can now provide back to my business stakeholders. What's also interesting is that, using Profisee's connector for Power BI, we can build out rich, more data-management-centric dashboards that allow us to look at our data in the context of our address data quality, the quality of our email values, data quality issues, etcetera, and actually use Power BI to provide a dashboarding experience that adds context to the master data itself. One thing that's really cool about this is that now, when we go back into our Profisee environment and look at my customer analysis, not only can we create this highly valuable content within the Fabric experience using Power BI, but we can then embed that Power BI content back within the data stewardship experience. What this allows us to do is bring transactional data from our OneLake and our data warehousing environment back into Profisee, and actually use that transactional data to add context to the data stewardship experience. Now I can use this as a dashboard to help me understand where my data quality issues are, who my top customers are, and where I should be focusing. We can navigate, for example, from this Weekend Tours golden record to the actual Weekend Tours record that's under management within Profisee. We can open this up, see all the details, and open up the match group and see all the details of that match group. So we've really created a round-trip, fully integrated experience between Profisee, Purview and the Microsoft Fabric products. Hopefully that helps everyone understand the art of the possible between Profisee and these other Microsoft products. What we would like to do now is wrap up with a demo of our forthcoming capabilities. We are in the process of developing a native integration with the Azure OpenAI Service, and the goal in doing this is to allow data stewards to seamlessly leverage the power of these large language models without having to be prompt engineers or understand how to programmatically ask good questions. We're productizing that experience so that it just happens naturally as part of the data stewardship experience. One of the things we're doing is adding the ability to do natural language search. In this case, I'm looking for a particular set of bicycles. What I can do is just type in a description of what I'm looking for, and what this will do is call OpenAI, and it will return back to us a filter based on what I was looking for. We're doing a lot of work to ground OpenAI in the underlying context, so that OpenAI is able to return a filter that represents what you're looking for and that is grounded in the information in your system: it understands that mountain bikes is a valid category, black is one of our colors, etcetera. So we're just making it easier for people to find the data they need without having to construct a filter, and reducing the number of clicks.
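Here is a minimal sketch of the natural-language-search pattern Eric previews: ground the model in the entity's attributes and allowed values, then ask it to emit a structured filter. The deployment name, attribute list and filter shape are assumptions for illustration; Profisee's productized integration may work differently.

```python
# Minimal sketch of grounded natural-language search, assuming an Azure OpenAI
# resource and a chat model deployment you control. Endpoint, key, deployment
# name and the JSON filter shape are all placeholders or assumptions.
import json
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://your-openai-resource.openai.azure.com",  # placeholder
    api_key="...",                                                   # placeholder
    api_version="2024-02-01",
)

# "Grounding": tell the model which attributes and values are valid in this system.
entity_context = {
    "entity": "Product",
    "attributes": {"Category": ["Mountain Bikes", "Road Bikes"], "Color": ["Black", "Blue", "Red"]},
}

user_query = "black mountain bikes"

response = client.chat.completions.create(
    model="gpt-4o",  # your Azure OpenAI deployment name (assumption)
    messages=[
        {"role": "system", "content": "Translate the user's request into a JSON filter using only "
         "attributes and values from this context: " + json.dumps(entity_context)},
        {"role": "user", "content": user_query},
    ],
)
print(response.choices[0].message.content)  # e.g. {"Category": "Mountain Bikes", "Color": "Black"}
```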
Another common use case in the master data management space is that we've got some unstructured data, and historically users would have to interpret that unstructured data to then create a product record. To demonstrate this, we have this bicycle that exists on Walmart's website, and in this scenario I'm a data steward and we're a distributor of this bike. I need to create a product record that represents this bike, but I don't want to have to read through all this information in order to distill it into an actual product record. So what I can do here is just copy the details of that product, and what we're now doing is sending this information, as well as the broader context of what we're working with, which is product data. What we get back is a significant time savings. As a data steward, I don't have to sit here and crawl through this data and figure out what the name of the bike should be. I don't have to interpret that the color is light blue but the standardized color that we display on our website would map to blue, and Profisee is indicating to the user the values that actually came back from OpenAI, so I can oversee this and determine whether or not it's good information. For example, I might say that this weight seems a little bit high; as I look up here at the data, I see that that was the package weight, so now what I need to do is figure out what the appropriate weight is. But at the end of the day, I've dramatically reduced the amount of time it's going to take me as an end user to enter this information, and I've let OpenAI do a lot of that legwork for me. Another similar but different scenario is, hey, my organization is doing business with other organizations, and we need to create a new customer record here. Instead of having to go research and Google around, I can just type in Microsoft.com, and we can go out and leverage OpenAI to basically bootstrap this record, creating a lot of the basic information that we need to fill out this record. Now I can come back in as a human and oversee this, but at the end of the day, the amount of work that I have to do is dramatically reduced. So these are all capabilities we're excited about. They're part of a long-term roadmap we have around OpenAI. There are a number of different integration patterns that we have planned with OpenAI, where we intend to automate all the facets that we can of the data stewardship experience, then bring those same capabilities to the admin experience, where we can sort of auto-configure the platform as you go and reduce the amount of effort and time it takes to actually set up a solution, and ultimately leverage OpenAI on an unattended basis, where we can use the OpenAI service in bulk but, in doing so, be able to discern between the data that came from traditional sources and the information that came from the OpenAI service, so you can use them together in a responsible way. So that was everything I had prepared for the demo. I'll hand it back over to Martin; I think we'll have a little bit of time left here for Q&A. Martin Boyd: Perfect. Eric, thank you. Let me share my screen and we'll drive towards a conclusion here. I have just a little bit of a conclusion slide here, and then, as Eric said, we will get to Q&A. There have been a lot of questions being answered in the chat already, but we'll see if we can get to some more. So hopefully what we've been able to do here is go through these things.
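Before the recap, here is a similarly hedged sketch of the unstructured-text scenario Eric demonstrated above: paste the product text, get back a draft record that a steward then reviews. The field names, the standardized color list and the deployment name are assumptions for illustration, not Profisee's actual schema or prompts.

```python
# Illustrative sketch of turning pasted, unstructured product text into a draft
# structured record. All field names, colors and the deployment name are assumptions.
import json
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://your-openai-resource.openai.azure.com",  # placeholder
    api_key="...",                                                   # placeholder
    api_version="2024-02-01",
)

pasted_text = "<paste of the product description copied from the retailer's page>"

prompt = (
    "Extract a product record as JSON with keys Name, Color, Speeds, WeightLbs. "
    "Map Color to the closest of: Black, Blue, Red, Silver. Return only JSON.\n\n"
    + pasted_text
)

draft = client.chat.completions.create(
    model="gpt-4o",  # your deployment name (assumption)
    messages=[{"role": "user", "content": prompt}],
)

# Assumes the model returns bare JSON as instructed; production code would validate it.
record = json.loads(draft.choices[0].message.content)
print(record)  # a steward reviews and corrects this draft before it is saved as master data
```

As Eric notes with the package-weight example, the steward still oversees and corrects the draft; the model only does the legwork.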
We talked about what Fabric is, we talked about why they are better together, and Eric showed you, and we talked about why adaptive is the right way of pursuing MDM in the context of a broad data estate, a data fabric, et cetera. So hopefully that was all clear, and not only clear conceptually, but you were able to see it. Final slide here: if you want to find out more about Profisee or our platform, you can request a custom demo off our home page or speak to sales. And by the way, we host a weekly webcast, currently on Wednesdays; there'll be one tomorrow where we'll cover a lot of the same material and you get to ask more questions, et cetera. You can register for that at the same spot. A couple of other things to mention in closing. If you're interested in all of this as a technology, well, we've mostly been talking about technology here today, but there's a lot more to this than just software, and so we spend a lot of time trying to communicate best practices, strategies, et cetera. We have a gentleman by the name of Malcolm Hawker who, for the past three publications of the Microsoft, sorry, not the Microsoft, the Gartner Magic Quadrant, was its coauthor. He now works with us here at Profisee, and he's very much an evangelist; he talks a lot about best practices, data strategy, etcetera. He has a podcast called CDO Matters. If you spent the time to come to this, I suspect that you would very much enjoy the kind of content he shares there. He also does a lot of work on LinkedIn and live office-hours-type events, so you can engage with him and get, effectively, a Gartner consultation at a much better cost these days. And I also want to mention that on September 14th of this year we have our Data Hero Summit, where we're planning on having a lot of our customers speaking, talking about a lot of the things we talked about here, as well as governance and many other topics. So with that, let me stop talking and get to some questions. Holly, one of the first questions that came up was: where does Purview fit in, and will Purview be part of Fabric? How does that whole strategy come together? Holly Kelly: Well, I was just writing an answer to that, so sorry. Okay. So there is some integration today. Basically, you have the ability to host your Purview data hub within Fabric, and there's a link in the documentation that was put out with the resources; there's a whole section to walk you through that. We're also doing a bunch of integrations with things like MIP labels, DLP policies, those sorts of things. And then when we go GA, we will automatically scan the Fabric artifacts, and that will be used to populate the data map. Now, that being said, there's a lot more integration that we're looking at, and that will likely be post-GA. But the great thing about Fabric is that, because it's a SaaS-based platform, the next version after GA is going to come the week after, and the week after that; releases will show up over time. But absolutely, the intent is to have that deep Purview integration in order for you to govern your data within Fabric. Martin Boyd: Fantastic. Eric, a couple for you. Can you just tell us what version of the software you were demoing, and when will this stuff be available? Speaker3: Yeah.
Eric Melcher: So the version I was demoing was a prerelease build, a beta build if you will, of our next release. The next release will ship in about six weeks, and that release will have all the OpenAI components in it. I saw a few other questions around OpenAI that I'll bundle up. There were some questions around the mechanics of it. The way our OpenAI integration will work, it will work regardless of how you deploy the platform, whether it's SaaS or PaaS or any deployment model. What we're doing is basically integrating with your Azure OpenAI Service. You control the Azure OpenAI Service, you control the models that are in that service, and then we're leveraging that via an API; the Azure OpenAI Service has a set of APIs we're leveraging. So it puts the power in our customers' hands to control the models and control the usage, and we've productized the actual integration with the Azure OpenAI Service via its APIs. Martin Boyd: Just to confirm, Eric, there you were answering the question about AI; all of the stuff previous to that, the Fabric integration and so on, that was the current release software, yes? Eric Melcher: Yes. Everything else was GA. In fact, all of the demo except for OpenAI was on released software, so there was nothing there that isn't available today, correct. Martin Boyd: Perfect. And Holly, since we're talking about release dates, there were questions about when Fabric will go GA. Holly Kelly: Yeah. So Fabric is slated to GA in November of this calendar year, and we will announce GA at that time. And then, as I mentioned, GA for all intents and purposes is basically a label. We will continue to innovate; we'll continue to do bug fixes, performance fixes, feature releases, et cetera, on a very consistent basis. Martin Boyd: Excellent. Eric, one for you. You showed Purview here. How do we work with other governance products like Collibra, Alation, etcetera? Eric Melcher: Yeah, look, we have customers that use every governance product out there. Today, Purview is the data governance platform that we've developed an explicit integration with, but we certainly have customers that are successfully using Profisee alongside virtually any mainstream governance platform in the market. So they can be leveraged, they can be used together; Purview is just the one we've developed an explicit integration with. Long term, it's likely we'll develop other integrations; a big part of our strategy is to focus on MDM and then to integrate with leading technologies in some of these adjacent spaces. So if you're a Collibra or Alation, et cetera, organization, we'd be happy to chat about other customers that are using that product with Profisee, and even potentially introduce you to them so you can understand how they've achieved that. Martin Boyd: Excellent. Holly, one for you. You've already answered this a little bit, and it's probably premature given that it's still a prerelease product, but what are the big future development goals for Fabric that would help customers adopt Fabric? Holly Kelly: Well, that's a question. Martin Boyd: It's only a little question. Holly Kelly: I mean, I'll just say one thing. Fabric is one of the biggest initiatives I've ever seen in my 25 years at Microsoft.
And it took a lot of organizational change, leadership change, et cetera. What you're getting now is primarily the analytics capabilities. There's a huge, broad vision of Fabric being that center of gravity for all data, so operational, analytic, all of the different components, and that's very much a long-term vision; it's going to take us a while to get there. But I think we're making some really great strides just on the analytics side, in the way that we completely rethought things and moved to this future-proof, more modern architecture that truly and fundamentally has the potential to grow. So the biggest thing I can say is that this has a ton of executive support, it's got a ton of executive visibility, and there have been lots of changes, but they were needed changes, and there are a lot of plans ahead. Martin Boyd: Perfect. Thanks. I think that's a great place for us to stop; we're at the top of the hour anyway. A lot of questions were already answered in the Q&A and chat. Anything that wasn't answered, and there are still quite a few that weren't, we will get back to you on. I just want to thank both Eric and Holly for helping present this. This is one step on the path for us; it's an ongoing development and an ongoing way of helping customers get better value from their data. So thank you very much, all of you, for your time and attention, and we look forward to speaking to you again in the future. Thank you so much. Eric Melcher: Bye, everyone. Thanks.