Video: Customer Spotlights | Duration: 4608s | Summary: Customer Spotlights | Chapters: Session Introduction and Housekeeping (31.37s), Evolution of DevX (221.92s), Customer Engagement Initiative (1208.69s), ChangeMan ZMF Implementation (1338.355s), Modernizing Mainframe Development (1921.845s), Johan's Presentation Introduction (2425.48s), Euroclear Environment Overview (2479.345s), Mainframe Development Environment (2556.49s), Release Build Process (2629.62s), Versioning and Labeling (2737.08s), Component Version History (2855.63s), Code Review Integration (2964.755s), Code Review Process (3123.095s), Mainframe Modernization Challenges (3426.045s), Modernizing Legacy Code (3536.315s), Implementing Private-Protected-Public Architecture (3680.415s), Future Development Plans (3899.87s), Q&A and Conclusion (4094.19s), Scratch Code Management (4216.095s), Conclusion and Thanks (4297.015s), Concluding Remarks (4455.66s)
Transcript for "Customer Spotlights":
Hey, everyone. I think we're ready to start, so I wanna welcome everyone to our second session of our first annual change management virtual user group. We had our last session a week ago today. Before we get too deep into the presentation, just some basic housekeeping. Some of you may not be aware of the format because this might be the first time you've actually attended; this is the second session, but it's the same format we used last time. We have a Q&A session. The chats have been disabled, but if you have any questions at all at any point in time, please submit those. We have people standing by who will be able to answer those questions as we go through today's presentation. Also, there'll be a chance to do a survey at the end of this. I think I might have failed to mention that at last week's session, so I'll mention it again when we get to the end. Please take some time, I think it's about thirty seconds at most, to do the survey. This helps us do it again next year. We are looking to do a second annual virtual user group, and we wanna make sure we get your feedback as best we can. Also, the screen has a standard size, but you have the ability to expand that at the bottom, I think, to the right; just expand that to make the screen larger. As I mentioned, this is our second session. We have one more week. This particular session is gonna be around customer spotlights. We have opening remarks from my manager, Ed Marushin, VP of product management. I'll be kicking over to him in probably another couple of minutes. We have two people from Raiffeisen, Wolfgang and Stefano. And then from Euroclear, we have Johan Jacob. So we have really good sessions today from our customers directly.
Something I think we all encourage is having customers be able to talk and share their experience around ChangeMan: where they're headed, some of the customizations they've done, and what we've been able to do with them as well, where we can. Next week's session is very important to us. We do have a roadmap; I was able to share a little bit of that last week, between our development team and Jeff Boatwright, who was able to share some things that are coming up in the next release, which is scheduled for around the middle part of Q1 of next year for 8.3 patch 2. But we're also looking, as you can see in the third session, to expand this around VS Code and our existing Git integration. Really, what we wanna do is make sure that folks are able to come attend that session, bring questions, bring comments. It's gonna be unlike today's presentation and last week's: it's not gonna be a lot of content. I mean, we're not trying to have a lot of slides; it's really gonna be more of a discussion. So I'm really hoping that those that attend will do that. And if you're not familiar with the session, please reach out to us; we'll make sure we send out the invites for you. So in just another minute, I'm gonna hand over to Ed. He'll take you through his keynote, and then we're gonna transition over to two folks from Raiffeisen: it's gonna be Stefano first, then Wolfgang, and lastly Johan. Much like last week, it's gonna be a little bit more than sixty-five minutes, give or take. Depending on questions, we're probably gonna go ten minutes, maybe fifteen minutes past the hour. Just like last week, this session is recorded. We will have these available.
I think initially I was thinking we'd have them within about twenty-four to forty-eight hours after the session ends. However, we're actually gonna wait until the third session next week. That way, we can put them all up at the same time. So we're looking at probably the end of next week to have all three sessions available. Again, we are recording it, so if you have to drop off at the top of the hour, that's not a problem at all, and you'll be able to do the replay later on. So with that, let me hand over to Ed. Ed, I think you're there. If you wanna say a few words, and then we'll head over to Stefano in just a bit. Perfect. Thanks, Jimmy. First and foremost, it's a pleasure to be with you all today. We're thrilled about this virtual user conference. So happy that all of you have joined us today. And also really, really happy to have birds-of-a-feather sessions where we have customers talking to customers, because that's where excitement really happens. That's where tips and tricks and what have you get shared. But, just as importantly, you get to leverage each other's knowledge in terms of making sure that you're taking advantage of better practices as you go off and do your day-to-day work. The thing I wanna hammer home right off the bat, though, is to say thank you to all of you folks that work on or around change management at large, but also on or around our z platform. For me, the last few weeks have been spent literally having a lot of dialogues with our customer base, which has made me feel really happy. I'm the head of product management for our infrastructure modernization business unit, which I didn't state earlier. But I get to talk with a lot of clients, including a few last week in the UK. And, really, what we heard a great deal about was the fact that the mainframe is more interconnected than ever.
It's not just about big data, not just about mobile and blockchain and what have you, but everything from APIs and all the other favorite three-letter acronyms we can leverage for the technologies we use, to things like AI and IoT and social. It's all about interconnectedness. And that interconnectedness goes deep. It's a core competence, in my mind, of our mainframe clients that they're able to accommodate the next thing: accommodate AI, accommodate blockchain, accommodate big data, and really do change management and adapt to an evolving environment rapidly, securely, in a high-performance kind of way. And for that, I just wanna really say thank you to all of our z people. This ecosystem lives and thrives because of all of you sharing information, sharing best practices, and really fueling the z environment. And just as importantly, as we've said over and over again, if you purchase something on Amazon with a credit card, so to speak, or you're heading off to Zurich on a flight, chances are your transaction has hit a mainframe. And for that, we just say: from A to Z, the power of the global economy sits atop the fact that z is able to do all these transactions at millions and millions of transactions per second, and really able to support the platform that is the global economy. So, again, thank you all for doing that, and thank you for being so adaptive. So as we think about how we evolve forward, it's clear that applications need to be built, applications need to be changed, and applications need to hit our major platforms. And our developer friends have a lot of work in front of them. Lots of QA work. And our developer friends always talk about how the processes are cumbersome. It's a high-pressure, high-stress position that they're in, as all of you are in as well.
And so our job here is to see if we can't take some of that pressure off, to make it a little easier. Now let's go talk about that. So as we think about the development of new applications and new capabilities, and the adaptiveness we spoke about a few moments ago: why does this notion of having a developer experience that is robust, solid, lightweight, if you will, matter? Well, we think there's a productivity impact. We think there's a retention impact. We think there's an innovation-fueling impact. We think there's a modernization impact as well. The totality of the improvements that we can all make as we think about improving our developer experience is substantial. They really hit home, at our hearts, where we live. They make sure that our people are doing the best possible work. They make sure that the innovation moves forward quickly, and they make sure that we're all incredibly satisfied with the work we get done. And, again, they make sure that we're developing and delivering on the adaptability promises that we've made to our organizations. And so we enter into this notion of DevX. In our minds, we've all seen this infinity loop, right, in our way of thinking. The infinity loop is incredibly critical: as you think about building, deploying, and then monitoring, and the feedback loop, all those steps kind of create value, and it's critical for us to sit back and make sure that we're providing value in the areas that we think are critical, where we can actually go off and help. And in our way of thinking, from a Rocket perspective, our place to help is in that code, build, release, deploy, and monitor set of steps in this infinity loop that we talk about all the time. And also enabling the automation and AI that go along with it. We'll talk about that more in the coming slides.
But, really, in our way of thinking, there are three clusters of things that we can actually help with. We'll talk about those in just a few moments. Really, it's about making sure that you've got the ability to build out the code that you want, release it in a way that makes sense, and making sure we're providing a feedback loop capability, in terms of monitoring capabilities that bring things to bear and allow you to leverage the information coming out of that monitoring loop to do the iterative development that everybody wants to take advantage of. So as we think about those pieces, right, what we wanna sit back and say is: hey, there are some tools and infrastructure pieces that we really wanna make sure are there, but there are also things like documentation that we wanna make sure are completely present. We wanna make sure that we've got some degree of continuous learning and onboarding happening in terms of our overall organizations. Right? This is how we think about DevX, so to speak. And then there are team systems and processes, which automation can help and enable. So the combination of these four pillars, if you will, gives you the ability to deliver innovation quickly, with high degrees of quality, with good degrees of security, and to create the feedback loop so very necessary to evolve our capabilities moving forward. So what does that mean in terms of evolution? That was the question I was asked just recently by a number of customers. What's important? What should I be thinking about? Those were the questions we were being asked all the time. And the thing we're hearing from our client base, and from the conversations we've had, hundreds of them over the last two years, is that Git is becoming increasingly important in terms of product evolution and product development.
We wanna make sure that VS Code is part of the story, and we wanna make sure that the OpenTelemetry capabilities that are out there, the set of standards associated with them, are part of the feedback loop mechanism we just spoke about in terms of the monitoring capability. So let's talk about this for a second. Why is Git important? Git has become one of the de facto, preeminent tool sets out there. We wanna make sure that we're working with Git effectively. We wanna make sure that there's some degree of interoperation, say our clients. And we wanna make sure that we can leverage the technologies, both from a repo and a process perspective, that Git actually brings. And VS Code becomes critical in terms of essentially being the front door to a lot of our development tooling and a lot of our development operations that are in place. And so we think this is an important piece of the story that we have going forward. And, frankly, finally, I should say, OTel is a key and fundamental component of how we think about monitoring as we go forward across our tool sets and tool chains. So, three important sets of capabilities that we wanna be thinking about. Now, all that's complemented by what we think is another piece of critical infrastructure here: the enablement of interoperation, the enablement of integration across a whole bunch of tools. Because really, at the end of the day, as we think about modernization as Rocket, as a company, it's about "yes, and". It's about complementing the sets of capabilities that you already have in place. It's about making sure that we don't call for rip and replace. It's about meeting you where you are. You have tools that you like. You have processes you've put in place over time, some of which you love.
And we wanna make sure that we can deliver functionality on top of, and integrated with, those sets of processes and tools that you already have in place. It's not about ripping out. It's about complementing. It's about meeting you where you are, so you don't have to go through a massive rip-and-replace exercise to benefit from the work and technology that we bring to bear. So, in a world where you've got lots and lots of technology sets out there, and I'm sure there are even more that you have in place relative to the ones we're showing on the screen, and of course we're able to integrate with those pieces, it's about making sure that our interoperation, our integration with these technologies, is seamless: so you don't notice, so your developers don't notice, so that we don't slow processes down but instead enable them to speed up. So you can deliver new innovation more quickly than before. So they can adapt to deliver very sophisticated technologies more quickly than before. And, as we spoke about earlier, so you can adapt when necessary more quickly than before. That's what we're here to help with. So as we think about change management today in that construct, then: we've got our mainframe-based tooling and we've got our hybrid-environment-based tooling, and some of it's on-prem or what have you, but let's just call everything non-mainframe "hybrid" for the purposes of our discussion here. We think these are great constructs. They serve us incredibly well. We think, as we move forward, however, that these pieces need to interoperate much more cleanly than they have in the past. We know we have built some bridges historically, right, between Git operations and what have you and other technologies. What we think we need to do, however, based on discussions with all of you, is make these pieces operate in a little more seamless fashion than they have in the past.
And as we think about moving forward, we do think, as this notion of the infinity loop kicks in, that our mainframe technologies and our hybrid technologies need to interoperate much more cleanly than they have in the past. That means we need higher levels of integration and interoperation, and to allow you, the customer, to sit back and reflect on where the repository of record sits. And maybe it's just across both sides of the house. Where does the change process sit? Where does the analytics side of this sit? Where does the governance happen? Where does the release process happen? What have you. We think these are complementary capabilities, and you should be able to choose how these things are constructed and deployed in your environment. And we don't wanna be prescriptive about how these technologies get deployed. What we wanna be prescriptive about, frankly, is your success. And so as we think about the evolution of ChangeMan, we do see a world where we essentially allow you to choose how these technologies complement each other and how they can interoperate in a seamless fashion. We'll talk much more about that in the next session and in the coming days and weeks as we move into 2026 and beyond. But know that it's our take on things, based on feedback from all of you, that these pieces need to fit together better than they have in the past. And so as we think about that evolution, right, the complementary nature of your hybrid environment and your mainframe environment, we think there are three big pillars that we actually need to go off and address. Let me take you through them in just a moment. First and foremost, from a coding and development perspective, we bring a lot of capabilities and technologies to bear. But, really, what we wanna make sure of is that you can choose what you want, including bringing your own or leveraging other ecosystem capabilities.
We think VS Code is an important component here, and so do you. You've told us this. What we wanna make sure of is that we complement what you have in place. As you go through the build, release, and deploy capabilities, we have ChangeMan, obviously, which you're interested in talking about today, and then our Enterprise Orchestrator capability, which puts automation around ChangeMan and other change management technologies and allows you to link together all the various bits and pieces, including the seamless integration notion that we talked about earlier, in terms of leveraging technologies like Jenkins or what have you to be more effective. And finally, as we think about the monitoring piece, we have our TMON product set, which really helps you with a set of OTel implementations that we'll be talking more about in the coming weeks. And also C\Prof, which really enables you to tune your CICS environment in a much better way than the tooling that is provided today. We encourage you, obviously, to have some discussions with our teams about these technologies, but I am mindful of the time and the content that we wanna get through today. So, in our way of thinking, those are the three pillars that we can actually help you with. And, again, from a non-rip-and-replace, meet-you-where-you-are, complement-existing-tools-technologies-and-processes perspective, we think these capabilities will help you move forward more quickly than you can today. And so as we think about the evolution of DevX and the capabilities we need to put together, we do think that worrying about the coding part, or coding time as my developer friends will call it, the build and release time, and also the monitoring time is incredibly critical to all of our successes.
And let's remember, all of us really wanna make sure the mainframe is a core part of how we evolve forward, making sure that the millions and millions of transactions that happen on this box support the global economy, as we all do. And, again, if you wanna buy a book on Amazon or you wanna fly to Zurich, A to Z, the mainframe has you covered. If you're flying, chances are you hit a mainframe. If you're doing credit card processing, you hit a mainframe. And that's where the data lives. But under 30% of the professionals that we spoke to are able to leverage that data in terms of making sure it's a core part of how they think about their strategy moving forward. And so we wanna enable these pieces through the technologies that we just discussed. So how do we think this all works? Well, at the end of the day, in our minds, this is about making sure that the mainframe and hybrid capabilities deliver as one, so to speak. That they fully complement each other and are seamlessly integrated into your environments. So the z is a core part of everything you do every day: not just a piece, but a fully integrated piece of how you deliver the value you deliver all the time. With that, I just wanna again say thank you so much for taking the time to be with us this morning, afternoon, or evening, depending on what part of the planet you're on. But also, just as importantly, for allowing us to help you better understand how we're thinking about the evolution of these technologies. And, again, we'll talk about this much more in the coming days and weeks, but we do believe that this is a better way for you to progress forward and leverage all the power that you have at your fingertips, in a way that'll deliver innovation to market more quickly, more cleanly, but also in a nondisruptive way to your organizations. And with that, Jimmy, can I hand the microphone back to you? Absolutely.
Thanks, Ed. Thank you very much. I know you need to drop at some point, so feel free to go ahead and do that. Before I hand it over to Stefano and Wolfgang from Raiffeisen (I think Stefano is gonna be going first), let me set the stage a little bit. Myself and members of the development team, many of whom you probably had a chance to meet last week when we did our first session, had a chance to speak with them, I think, in the early part of the summer, and then another couple of times after that to do some follow-up. And what I really wanna stress is that we didn't just pick one or two customers to do that with. We actually had probably about a half a dozen or so customers we had a chance to speak with, specifically around what they're looking to do around DevOps, Git, VS Code, and anything in between, honestly. If you had a chance to attend the session last week, and certainly listening to Ed, I really hope the takeaway is that, unlike in the past, you know, after the Serena days and through the acquisitions, there's a lot of investment happening for ChangeMan, as you can see. Having a VP, two VPs, and the CTO next week attend really means a lot. And I really hope that comes across to the audience here, to our customers, that there's a huge investment around ChangeMan moving forward. And we don't wanna just have these one-offs of customer engagements like we did over the summer. We wanna have more of those. There are a number of customers that I've had a chance to reach out to where the timing just wasn't right, but I really hope we can start getting reengaged with a lot of you so that we can do more of these things and not just wait a year; we could have them maybe quarterly or so. Much like the old days, when you were probably used to having user conferences, I think it was about every quarter.
So please keep that in mind for the rest of this session, as well as into the next session. With that, I think you're gonna go first, Stefano. I'll kick it over to you. And then, once you're done, if you wanna hand it over to Wolfgang, I'll then hand it over to Johan when you guys are finished. Yeah. Thank you. I'm Stefano Antonacci. I work for Raiffeisen Information Service, or RIS. We are based in Bozen, or Bolzano. We are in a region where we have two languages, Italian and German, and this is also one of the richest regions in Italy. Our company is the IT provider for our owners, which are all banks from our region. And we really work only on bank issues, so not insurance or other companies, just the banks. We have 40 banks, our banks have more than 2,000 employees, and we are the only service provider for those banks. In our IT organization, we are almost 200 employees. And we started a journey seven years ago: we had a custom software life cycle tool at RIS, and we started migrating to ChangeMan. So, really, we are some of the late adopters of ChangeMan, for some reasons, even if I have more than twenty years of experience with ChangeMan at other companies. But I've been at RIS for five years, so I could really follow the whole path from nothing to what we have today. At the beginning, the first effort, of course, was migrating from something totally custom and not so open to other worlds different from the mainframe. So we started integrating all the stuff, keeping what was critical at the same level, so really mainframe-like, even with the customizations in panels and so on. Typical stuff that you may find in ChangeMan ZMF when you have to keep everything that was implemented over many years. Of course, even our IT company has more than forty or fifty years of COBOL and PL/I code.
So it's not simple to keep up with all that software. We have more than 100,000 components in ChangeMan, actually. And at the beginning, like in this picture, we had really two separate ways to version, build, and deploy: on the mainframe, using ChangeMan more and more, you know, cleanly, with functionality more and more useful for a typical mainframe developer; and we had the other world, the non-mainframe world, starting to use GitLab, Nexus, Jenkins, and all the modern stuff. So after completing the migration into ChangeMan, we started a path of modernization. First of all, we started using HLL exits for moving constraints and automation outside of the panels and the custom stuff, to have them in a clean way, and so available also from APIs and so on. And then, one of the biggest implementations I've seen in ChangeMan is the REST API. Really clean, well done, and really easy to deploy. So, adding the REST API to the ChangeMan ZMF environment, we were able, at the beginning, to start giving the same development environment, the same IDE used by the new developers, also to, let me say, the older developers, and they both appreciated the new IDE, not only bound to ISPF but also with all the modernization. You know, in VS Code you have so many plugins; you can even already use AI with COBOL if you want. It's not so difficult to implement a plugin in VS Code. It would be really more difficult in ISPF and so on. At the same time, we also, of course, added Zowe, because without Zowe we weren't able to communicate from VS Code to ChangeMan using the ZMF Explorer. But also, having all these REST APIs made us understand that it would not be really easy, but it would be possible, to add everything we need, like testing, like metrics, like security, SonarQube, and so on.
So, really, the REST APIs are a kind of open door for adding many, many other tools and completing the DevOps process. If you think that we started the whole thing seven years ago, and we started with the open REST APIs three years ago, we are really doing it fast. Maybe because we are a small company compared to other international companies; however, we can be really fast in deciding, in implementing, and in taking on as soon as possible all the new stuff that comes from Rocket Software now and ChangeMan ZMF and so on. Our latest activities have been made on the core ChangeMan side, to make it easier to migrate. We just adopted the more recent skeleton structure, which was really a good step ahead of what we had been using ten or twenty years ago. And then we also created a simplified version migration process. Now it's not such a big pain to migrate a ChangeMan version like it was, you know, ten years ago. And then we also started to change focus from mainframe-only: we got rid of most of the custom panels and customizations and added automation and constraints in HLL exits. With that, we set up all the ChangeMan REST API infrastructure so we could enable new developers, who started using the ZMF Explorer plugin for VS Code. We also extended the ZMF Explorer by adding some tasks written in Python, because some functionalities were missing, like auditing the packages, promoting, and so on. And then we also started using Git ZMF from VS Code, because we found that it's even easier for developers, especially for new developers coming out of university, already used to Git and VS Code. So they are immediately productive, even with the mainframe. It's really a nice addition for us. And also ChangeMan ZMF is a key point.
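That "open door" idea can be sketched in a few lines of Python. The snippet below shows the general shape of driving a ChangeMan ZMF REST endpoint from a script; note that the host, port, base path, and the package-search route are purely illustrative placeholders, not the documented ZMF REST API routes, and a real setup would use whatever authentication scheme is configured on the ZMF REST server.

```python
import base64
import json
from urllib import request

# Hypothetical ZMF REST server address; adjust to your installation.
ZMF_BASE = "https://zmf-host:8443/zmfrest"

def zmf_call(path, user, password, payload=None):
    """Build an authenticated HTTP request for a ZMF REST endpoint.

    Returns the prepared urllib Request; pass it to request.urlopen()
    to actually send it. The path is treated as opaque here because
    the real route names depend on the ZMF REST API version installed.
    """
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    body = json.dumps(payload).encode() if payload is not None else None
    return request.Request(
        ZMF_BASE + path,
        data=body,
        headers={
            "Authorization": "Basic " + token,
            "Content-Type": "application/json",
            "Accept": "application/json",
        },
        method="PUT" if body else "GET",
    )

# Example: a (hypothetical) package query, ready to send with urlopen().
req = zmf_call("/package/search?package=DEMO000123", "user1", "secret")
print(req.get_method(), req.full_url)
```

The same wrapper shape works for the other integrations Stefano mentions (testing, metrics, SonarQube): each tool just needs a small adapter that builds requests like this and interprets the JSON that comes back.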
It's a key tool for us, and the key points for us are that we have always had top support for problems. Now we also have Stephen, and he's really a special guy, because the support was really top. And with Rocket Software we can have an open, direct communication, so we may speak about our problems, our path, and we really appreciate it. And why ChangeMan? Because it really supports, and it's not just, you know, words for customers or some slides, it really supports a path for mainframe modernization without disruption. So we may keep all that still works, even thirty years after it was written, and we can really make a path for modernization. Also, ChangeMan keeps a focus on the core mainframe functionalities, because we still need the mainframe to work exactly as we expect it to, because our core business is maintained and supported by the mainframe. And for the future, we need even more integrations. So we are also going to add code analysis, test frameworks, and so on. And we need more functions from the latest components that work, maybe still expanded with more functions, like the ZMF Explorer plugin for VS Code, the Git ZMF, and so on. We also would like some evolutions in core ChangeMan on the mainframe, for features like progress at the component level, but we also saw last week that in the next version we will have an HLL exit for this, so we are really happy about it. We could have some more features for version management and for having a monitor for promoted components; there's still work to do even there. And so, Wolfgang, I'll hand you the mic. Okay. Thank you, Stefano. You already heard something about our modernization process in the last years. So, seven years ago we started with ChangeMan. Around three or four years ago, we started with a modernization of the mainframe itself.
So our management decided to do the modernization. And, as you know, I think every mainframe shop has to do this modernization. As you know, the development environment is maybe the first thing to be modernized these days. This is the actual situation we are living in. Okay? So three years ago, we decided to do the modernization. We did a bit of market screening of our partners to see what they have, which development environments. And the motivation, or the requirements we had at the time, were that we need a modern user interface that is, yeah, current. We wanted an interface that is easy to extend, to customize, and we wanted common technologies. So in our shop, we have a sister department that performs the programming for the distributed world, and we wanted to use common technologies, technologies that they already use on their side. Important for us was also the support. So we didn't want any, let's say, open source tool, nice and fancy, installed on our side. In the end, we decided to go with VS Code and the whole Zowe ecosystem. So on the client side, you have the Zowe Explorer and also the Zowe CLI. In the middle, you have the Zowe mediation layer, of which we have at the moment only the API gateway running, which handles SSO for us. And at the bottom, you have all the mainframe services we can consume, in this case from VS Code. So the Zowe Explorer gives you the possibility to access the files and, let's say, SDSF, so all the jobs, the spool. And this was really the first step where the mainframe developers saw that we can have something on your client, on your workstation. But, obviously, it was not enough, because your whole development life cycle is not just downloading some files from the mainframe. You have to really program in there.
So we added other extensions, for example the Db2 Developer extension, which gives you the possibility to look at the data in your Db2 instance, though that has maybe nothing to do directly with the development life cycle. And last but not least, we added the ZMF Explorer. We were really happy to find something in the VS Code marketplace already ready to be used. We rolled this configuration out to all our developers. How many is all developers? We are 25 mainframe developers at the moment, so all 25 have the possibility to use this technology. Really quickly we got feedback from them that, in the initial days, the Zowe Explorer had some limited options: for example, there were problems with the encoding, the filtering, and the sorting. The Zowe CLI also had some issues with credentials. And the ZMF Explorer had some issues with authentication too. With the filtering, you cannot filter, for example, based on the status of your package. We had a lack of operations: you don't have a promote, you don't have an audit, you don't have unfreeze. Over time we saw the Zowe Explorer get enhanced, so it doesn't have those problems today, and the Zowe CLI doesn't have those problems today either. But we saw that the ZMF Explorer stopped development, stopped enhancement, at a certain point. Therefore we got in contact with Rocket, and Jimmy said that there is also a Git ZMF connector in the pipeline, and now we use this Git ZMF connector for pilot users. Those pilot users are really happy, happier than with the ZMF Explorer, I have to say. Okay? Therefore, for two reasons, we think the ZMF connector will replace the ZMF Explorer, at least in our reality. The first is that we have a common language with the distributed guys, the distributed developers.
So I can have an expert on Git and solve problems on our side. And the second one is, as Stefano already mentioned, with the ZMF Explorer, given the lack of operations, we had to do some scripting. We wrote some Python scripts that use the Zowe CLI and the functionality of the plug-in to do a promote, to do an audit, to do a freeze, and to do an approve: the basic operations that you need in the day-by-day life of ChangeMan. And as I said, since we have those scripts on our side, if we have to add one or two more scripts to use the staging and compiling functionality, it's not a problem for us. Okay. In the end, to complete our development environment, we need the debugger as well. We have the IBM debugger running on our mainframe, so we want the full integration of VS Code with the whole life cycle for the developer. This brings me now to our vision, where we want to be in, let's say, two, three, maybe four years. I think all of you know this DevOps loop. We have now modernized the code part a little: VS Code and Zowe give us the possibility, for example, to ramp up new people very fast. In the build part, today we use ChangeMan ZMF to do all the builds. We want to extend that and use Jenkins afterwards, to integrate it with SonarQube. SonarQube is tooling we already use on the distributed side, Jenkins too. On the test part, we want to modernize with the Zowe possibilities we get to call the REST services on the mainframe, plus TestRail and JUnit. For release and deploy, we want to integrate Jenkins more with ChangeMan; Jenkins because we're using it on the distributed side, and often we have versions, let's say Java versions, that have to be released at some time, and a part on the mainframe has to be released too.
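Stepping back to the scripting Wolfgang mentions: a rough sketch of such a wrapper might look like this in Python. The `zmf` plug-in name, sub-commands, and flags below are illustrative assumptions, not the real Zowe CLI syntax; only the pattern is the point: build an argument list, shell out to the CLI, and surface errors.

```python
# Hypothetical helper for driving ChangeMan ZMF operations from a
# VS Code run task via the Zowe CLI. Command names and flags are
# invented for illustration; substitute the real plug-in syntax.
import subprocess


def build_cmd(operation, package, site=None, level=None):
    """Assemble the Zowe CLI argument list for one ZMF operation."""
    cmd = ["zowe", "zmf", operation, "package", package]
    if site is not None:
        cmd += ["--site", site]
    if level is not None:
        cmd += ["--level", str(level)]
    return cmd


def run(cmd):
    """Execute the CLI call; raise with stderr text if it fails."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(result.stderr.strip())
    return result.stdout
```

A VS Code run task would then invoke a script like this with the operation name and package as arguments, one task per operation (promote, audit, freeze, approve).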
So this, in the future, Jenkins can do with the integration of ChangeMan. The operate and monitoring parts remain the same: today we use Asynca, Grafana, and Splunk. And the plan part also remains the same, with Jira. At that point I want to close my presentation. Thank you for listening, and I pass the mic on to Johan. I will skip those. Jimmy, you're muted. Sorry. Thank you, Wolfgang, thank you both for your presentation. A couple of questions did come in, but we can hold them off until after Johan's presentation. And much like I explained about having a chance to meet with Rafelson, it's the same with Johan: we've had a chance to spend some time together over the last handful of months, right around the summer time frame specifically, and ultimately that's what led to this presentation here. There are a couple of customizations; I don't want to take too much of his thunder. He had a chance to show them to us, and I thought it might be good for the general audience. So with that, I'll hand over to you, Johan. Well, you took me by surprise by skipping three slides. Okay. Good evening, everybody. So, yes, let me jump into the presentation. Working at Euroclear, I created one slide just to give you a little idea of the size of the company, the number of transactions, the money that goes around. It's a financial market infrastructure: lots of money comes in, lots of money goes out, and some of the money stays in for all kinds of reasons. So, a quick overview of our environment. It's a very dull presentation, so I did not make a lot of fancy pictures; it's really plain text, a technical presentation created by a techie. Then some of our major customizations, where we are heading with our mainframe modernization project, what we would like to see in the future, and what the biggest milestones will be later on.
So we have one major development LPAR, two preproduction LPARs, and two production ones. We don't use any shared DASD; everything is done with Connect:Direct. We have about 10 clones used for different types of testing, and about 200 local test environments, with which we have really stressed the package master to its limits, because they're all defined as remote sites. We are a very heavy PL/I shop, which actually gives us quite some issues when we would like to go to open-system tools like SonarQube or even modern IDEs, which I will come to later. Some COBOL, Alcenda, and Taylou; a big Db2 shop, with MQ. We have web services and are now extending, opening up the mainframe using z/OS Connect. What are the four major customizations that we did? First, the scratch request was fully redesigned. We have a unique versioning and labeling system that we implemented using the change description and the staging versions. We implemented a code review process: as we are an FMI, every component must be reviewed by a second pair of eyes, and some of them, the critical ones, need to be analyzed or reviewed by experts. And then a release build process that serializes five releases, one upon the other, creates impact analyses fully automated, and builds the full releases. So what does it look like? For instance, the scratch package: all the scratches are labeled. Here we are actually using the requestor's department field to indicate that this package will contain scratches, because they are most of the time deployed at the very end of a release deployment. And you will see that any type of component, even if it's a copybook or anything else, is translated into a like-source component and goes through the scratch staging procedure. It then gets content, and that content is actually more indicative.
And when used within the build procedures, it will make a build procedure fail. Due to our ERO setup and our release build process, an impact analysis will run, and it will add all the components using that component into the release. And if it is indeed still being used, because a developer forgot to remove the include of the copybook from a component, or a call to a static routine, the build will fail. So even before anything goes into production, we know up front which ones should be adapted, or whether a scratch is being executed that shouldn't have been. This is already a sneak preview of our code review process: even a scratch must be reviewed by somebody else, to make sure that what is being requested to be scratched is allowed to be scratched. If it is a load module: a static routine will fail during linking, a copybook will fail during building, and a regular load will be built; but when you promote it and it is executed during testing, it calls an abending routine indicating that this load module is requested for scratching. So, normally, during the full life cycle of testing, or even building up front, we always know if a scratch is allowed or not, or if anything else is being impacted. This is maybe also important to show: we can see our components, because before, with the original utility request, people were always asking, yeah, but where are my scratches, and so on. So this is also one of the reasons why we introduced it like this, and it follows the same process as any other type of component: it promotes, it audits, and so on. Then the unique versioning and labeling system. How does it work? First, we enabled the staging versions for the components.
And every time you make a change, or a checkout from the baseline or from a package or whatever, the labeling mechanism will make sure that your version of the component is uniquely identified. What you see here is that the component was checked out from the baseline, that the source was originally modified in this package, and that this was the version at that moment. If you did a checkout from version minus one, you would also see the minus one here, with the corresponding package it came from. Then, each time you modify the component, the number increments, indicating where it started from. As you see over here, the baseline you originated from is no longer the same, because you see an intermediate version. Then at number five a checkout was done from a different package, and it's this final version that was actually approved in the code review, though probably this one is in line with this baseline version. So during audit, that version three is also stored when you edit; the versions are stored in a Db2 table, and during the audit process we run what we call a retrofit analysis, to make sure that you're still in line with the tree that was built. Why did we introduce this system? Fifteen, sixteen years ago, when we migrated to ChangeMan, we had an in-house-built versioning system where every version of a component was uniquely identified by a member, and this functionality had to be present. So when we introduced it, we used a similar exit to perform exactly the same thing. This has now been fully implemented using the HLLX XML exits on check-in and checkout. As you can see, there are a number of IDs: if you do a recompile of a component, you will get an RCP here; then the baseline number of the source; staging, when it comes from outside of ChangeMan; and the scratch request also gets a uniquely identified label.
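As a toy illustration of the labeling idea Johan describes, where every edit or checkout records the package it happened in plus an ever-incrementing number, a sketch might look like this. The `PKG#SEQ` layout is an invented format for illustration, not Euroclear's actual label.

```python
# Toy model of per-component unique version labels: each recorded
# change carries the package it was made in and a running number
# that never resets, so any two versions are distinguishable.
class ComponentHistory:
    def __init__(self):
        self.labels = []  # one entry per recorded version

    def record(self, package):
        """Record a change made in `package`; return its unique label."""
        seq = len(self.labels) + 1  # the number keeps incrementing
        label = f"{package}#{seq}"
        self.labels.append(label)
        return label
```

Because the sequence number is global to the component rather than per package, a reviewer (or a retrofit analysis) can tell at a glance whether a copy in one test environment is older or newer than a copy in another, which is exactly the benefit described for the SETSSI extension.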
Within the load module itself we also introduce it: we have adapted the SETSSI statement a little. It still indicates the package and the SETSSI, but it also carries the number of the version. The advantage of this is that, because we have so many test environments, people are promoting from packages to any test environment, and sometimes the version they originated from differed: you could have version four running in test environment one and five in test environment two. If there was a bug, people could actually see, oh, today the version has evolved, so we can still do the promote, or we can fix based on that version five, introduce version six, and go ahead. That's why it was also introduced into the load module. The code review process. I think that in any company that goes through audit rules, the code review process, or at least the auditing, whether it's an approval process or whatsoever, will be present, and it will be required by external auditors. In the past this was done in Lotus Notes, an external system where people had to request the code review of a component. It was very tedious work; a lot of people were running reports in ServiceNow and so on. So we integrated this into ChangeMan using the component activity file. Every component type that should be reviewed by a second pair of eyes has its own component activity file. The CR stands for code review, plus an incremental number: a source is CR1; for an SRR, which is a static routine, you will have CR2; and so on. This works through the HLLX again, with the pre and post XML services on check-in and checkout, plus dedicated staging procedures and designated user options.
So we assigned a number of user options where we store the date when it was reviewed, the date when the review was requested, who the last editor was, the version that is being requested, and so on. This is stored in the user options of the code review and of the source, and through the procedures that consistency is guaranteed. As soon as somebody edits a component and saves it, we automatically set the related code review component to incomplete. Then you can request it; only the last editor of a component is allowed to request the code review. An email is sent over to the reviewer, who has to review it. He can approve it or reject it; when he rejects it, the source component is also forced back to incomplete. And so this can be a cycle that goes on. For some of the components, we have one user option that indicates whether the component is critical and whether expert reviews are required, and in one of the user options, number 72, we set a number of TSO IDs, which are the designated code reviewers. On a daily basis, batch jobs go over the package master. What we did is that with the pmlot utility we take a copy of the full package master, we run the XML extractions on it, and then we send out emails again for code reviews that are still outstanding. So we are really spamming the reviewers to make sure that the process goes smoothly during development. Now, you can imagine that during the night, if you have an emergency change, you don't want an expert, or a second pair of eyes, to be called in the middle of the night. So the code review will be removed from the package; the incident or emergency package goes into production, and the day after, a code review package is created with only the code review.
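The cycle just described, where a save forces the review record back to incomplete, only the last editor may request a review, and a rejection loops back, can be sketched as a small state machine. The state names here are illustrative, not actual ChangeMan user-option values.

```python
# Minimal sketch of the code-review flow: edit -> INCOMPLETE,
# request (last editor only) -> REQUESTED, then approve or
# reject back to INCOMPLETE.
class CodeReview:
    def __init__(self):
        self.state = "INCOMPLETE"
        self.last_editor = None
        self.reviewer = None

    def edit(self, user):
        """Any save drops the review back to incomplete."""
        self.last_editor = user
        self.state = "INCOMPLETE"

    def request(self, user, reviewer):
        """Only the last editor may request the review."""
        if user != self.last_editor:
            raise PermissionError("only the last editor may request review")
        self.reviewer = reviewer
        self.state = "REQUESTED"

    def decide(self, reviewer, approved):
        """The assigned reviewer approves, or rejects back to incomplete."""
        if self.state != "REQUESTED" or reviewer != self.reviewer:
            raise ValueError("no open review request for this reviewer")
        self.state = "APPROVED" if approved else "INCOMPLETE"
```

Keeping the guard in `request` is what enforces the rule Johan mentions: a second developer cannot push someone else's unsaved change into review.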
The person who did the emergency change will need to assign it, and then the expert or the code reviewer will approve it, and the package is automatically approved and baselined. So, some of the panels. Here you see the code review component, which is present but still incomplete. When the last editor of the component stages this component, he will get this panel, as you can see. It's a regular staging procedure with user options and the version that is being requested to be reviewed. In this case it's not a critical component, so he will need to provide himself a user ID that is going to review the component. And here you see that the version gets an indication that the code review was requested; this is at the source component level. The reviewer rejected the code in this case, so the source code becomes incomplete; then it has been edited again, the notification is done again, and then, yes, it was approved. The advantage of having this system with the different version labels, and I'm going to go a little bit back in the presentation, is that you can have intermediate approvals. The advantage of intermediate approvals is that, thanks to the versioning system, the reviewer can compare the last approved version against the latest one, so they can have intermediate code reviews going on within the code. Once we had implemented this code review process, what we noticed is that in the past the review step actually came at the end, because it was so tedious: people said, okay, we are now one week before the code freeze, so off we send all the reviews, and then you have people going over the code. And each time it was the same; the code freeze came too late.
So now we are on time every time, and we have also noticed a big quality boost in the delivery of the code, simply because of those intermediate code reviews and because people can request them integrated into the IDE, whereas in the past it was externalized. Now our mainframe modernization. When I looked at the previous presentation, to be quite frank, it is actually a little bit similar. Ten years ago we already did a study of IDz, which from a functional perspective, with the client pack, was actually what we were looking for. But unfortunately, because the language we use is PL/I, the editor does not support the macro language that is present in PL/I. As we are a very heavy PL/I shop, with over 50,000,000 lines of code and 250 developers, we were unable to progress with the client pack and the IDz implementation, so it stopped. But then two years ago we said, okay, we have to go and do a shift left, as every company is doing. We did this in a couple of phases. First, we removed the technical debt: we removed almost 30% of our code base. As I said, we had 13,000,000 lines of code, and you can imagine, when you remove 30% of the code and you have release processing rebuilding code: we launch 12 releases a year, of which four are major ones and then a couple of intermediate ones. If you modify copybooks and static routines, you can imagine the time spent on building and also on analysis by developers. The latest critical project that is ongoing is going to save almost 900k euro, so it's a tremendous gain just from cleaning. We aligned a lot of code with the existing coding guidelines, and then we started untangling the monolith. And it's a big shop, a big legacy of forty, fifty years of code.
And we divided it simply into three major projects: the business code, the data objects, and then anything that is launched by the mainframe integration people, so JCL, include members, procedures, and so on. It was divided based on the CMDB, the configuration management database, where the code is organized by business domain and subdomains, so it was split into about 50 applications. Packages are then grouped by type of code instead of having applications by code; now we have packages by type of component. So we still have business code, we still have DBA, so database code, and we have production code, but now everything is grouped together by application. In the long run, once the code is really refactored, it will give us the ability to move an application out, maybe into the cloud. And then we started introducing modern development capabilities, meaning a component can be private, protected, or public. You can say that my copybook can only be used by my application; it can be protected, meaning within my business domain; or it can be public, so it can be used by anybody across the different applications within ChangeMan. So these are the packages by object type. What did we do? We now use the requestor's phone field, which we repurposed for this. These are the standard applications, or packages, for the business code, and these are the database components. Now, for the database components, in the past it was easy: you had a specific application, so it had its own approval process. When you created the planned package for the data components, there were about eight or nine approvals, while the business code only had four. So this was a problem, because now we have only one application and only one planned approval list. What did we do? We were a little bit ahead, so we moved the Assembler exit, as that knowledge is going away, and I'm not even a fan of Assembler myself.
So we said, okay, in the freeze exit we are simply going to remove the full approval list of that package, and we recreate a fully new one, compatible with the one we had in the original application. When you revert the package, fine, you will get the previous approval list again; but once you freeze it again, we move the approvals away and reintroduce them. Although in the new release we have the new HLLX, giving you the ability to add and remove approvals, in a way you could already do it today in this exit. But to align with the standard flow of the tool, once we have the exit, that logic will also move to the related HLLX. Now, the introduction of private, protected, and public. This was actually the biggest change, and from an architectural point of view within the baselines, I must be honest, it took me a while to untangle how we were going to introduce it. For instance, a copybook today is no longer a real copybook for us; it has become a source, and the developer indicates whether that copybook is going to be used privately, protected, or publicly. The same goes for a static routine, or for a DLL in PL/I: you can say that the DLL can only be used internally in your application, or within the business domain, or publicly. And the staging procedure, depending on the user option indicating whether it's private, protected, or public, is going to create the real copybook. Just as an idea: instead of a CPY, it's going to create a CPO or a CPI, an I for private and an O for protected. This architecture is then being created, so it was a big change. You have to imagine that you first have to migrate your components out of one application into the other, migrating from a copybook to a source and so on. And we had one big domain that was the pilot.
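The translation step the staging procedure performs, mapping the visibility user option to the generated copybook type, could be sketched like this. The CPI and CPO types come from the talk (I for private, O for protected); that public keeps the plain CPY type is my assumption.

```python
# Sketch of the visibility-to-library-type mapping for generated
# copybooks. CPI/CPO are from the presentation; CPY for public
# is an assumed default.
VISIBILITY_TO_TYPE = {
    "private": "CPI",    # I: usable only inside the owning application
    "protected": "CPO",  # O: usable within the business domain
    "public": "CPY",     # regular copybook, usable by anybody
}


def target_type(visibility):
    """Resolve the generated copybook type from the user option."""
    try:
        return VISIBILITY_TO_TYPE[visibility.lower()]
    except KeyError:
        raise ValueError(f"unknown visibility: {visibility!r}") from None
```

Encoding the visibility in the library type itself means downstream syslib concatenations can enforce ownership simply by which types they include.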
And it went so smoothly, without anything breaking whatsoever, that once it was launched, everybody wanted to migrate into the architecture immediately, because suddenly people saw the advantage of having their own components within their own domain, where not just anybody could modify the code. Before, we once had a big problem where somebody modified a table, and it wasn't even the owner of the table. Now we can really indicate the ownership of the components. This has been fully implemented for the GPS components, and we have started with the data components for a very early pilot, which also went quite smoothly. So as of next year, after the January release, we're going to start moving all the data components from all the other business domains into that architecture as well. Now, the future. I only have one slide here, but in a way the slides presented by Stefano and Wolfgang are almost similar, in that we are going with VS Code. Unfortunately we lost a lot of time, because two years ago we were asked to do a test, but it had to be tested on Citrix. It took me six, seven months together with the Citrix team to try it out, but in vain. So now we are doing it on VDIs. Our first pilot team will start as we speak; their VDIs are being set up. The REST API exploration is progressing. We have a young developer doing Python development to actually process the YAML file. As I've shown you, we have a very specific way of creating our syslibs, and the YAML file is going to help us. Then we did the very first design to move ISPF applications to the web, using Java servlets and the REST API. The next step: we now have ServiceNow, where the full launch scenario is designated, but it's still PMC that triggers the launch from within ChangeMan itself.
With the REST APIs and Ansible, we will do that from ServiceNow, where PMC will trigger it and the automation will start from there. We have been asked to foresee support for Java on z/OS. And I'm really eager to get in touch with Stefano and Wolfgang, to be quite frank, because in our shop we are going to start investigating how to map the Git structure onto ZMF packages. Do I have another slide or not? No. One final thing: the VS Code integration as it stands, especially as I had so much knowledge of IDz, is for us way too limited. And honestly, we are not a software shop, so we can probably automate quite a lot using all kinds of tools to do the same promote, approve, audit, create packages, and so on. But in my opinion, this should definitely be in the VS Code plug-in extension, so that we can shift all the developers to VS Code and have the same functionality they have on the mainframe. I would like to thank you for the opportunity to present this. If anybody finds something interesting, they can always contact me afterwards, offline. I'm sure that Jimmy can send my contact details to you all. Thank you. Johan, thank you very much for your presentation. And for anyone looking to get hold of Johan, the same probably also works for Stefano and Wolfgang: they're all very active on our community page, so you can always find them there. A lot of the time they're answering other people's posts or providing feedback and so forth. A few questions came up. I'm waiting for clarification on one of them; it relates to Wolfgang's part of the presentation. But at least for you, Stefano, one of the questions that came up is: are you willing to share the list of the VS Code customizations that you did? And if you are, I just might
stress that maybe the best place might be the community forum, but I'll let you answer that. Yeah, of course. But it's quite easy, because it's just Python tasks written for VS Code. Everything that is exposed by the REST API, we have done. As I said, it's really simple to interface with the REST API. So for promoting, it's just a matter of getting all the parameters we need, and then we run the Python script as a VS Code run task. You just define it as a run task; you have a list of names, promote, audit, and so on. And so we covered most of the functionality. We just didn't cover freeze and approve, because that's a matter for production, and we have some other customization there that we didn't want to expose at this moment in this code. However, it's just the typical sequence of actions you have to do to get the package ready for approval, ready for production: basically promote, audit, and so on. Nothing more. Thank you. And as I said, I'm waiting for a little more clarification on one of the questions, but if we don't get a chance to finish it here, I'll make sure I get the follow-up and share that with both Stefano and Wolfgang at a later time. One of the questions that came up for you, Johan, is around the early part of your presentation, the scratch piece. I'll just read it: can we have scratches and regular baseline components in the same package? I don't know if you can answer that one. Well, that's up to you. In our case, in the very early beginning of that process, they were mingled together: you could have a scratch and a deployment of the same code. But then sometimes we saw that, afterwards, it still had to be backed out for whatever reason.
But because the changes and the scratches were together, this was technically not possible, and then you had to create an emergency package, get the source back from the archive, and go through the full deployment process. That's when we decided to split it. Deployable code and scratch code are segregated, but that's simply a way of organizing it; technically, it was possible. Thank you. And then another question for you, Johan: is the code review process available for unplanned and temp packages? Temp packages are not allowed at Euroclear; we don't use them. But again, that's how you choose to implement it. And unplanned packages are something we do, because you can be called during the night and have to run something in an emergency. The only thing is, as I explained, we remove the code review components: the component activity file is deleted from the package when you do a checkout into an emergency package. This is done via HLLX. Then the package can be launched, and the day after, twenty-four hours later, there is a batch job that runs every day and investigates the emergency packages; it will create what we call a synchronization package. The department field will be SYN, and the material field will be CRV, for code review. Then the one who created the incident will need to request the review of that code review in the unplanned package, and it will be approved. There is also a follow-up process via our ServiceNow within the company. But that's how we do it. Okay, great. Thanks, Johan. And just one other question that came up, regarding more information around the Git Connector, possibly another future webinar or whatnot. Bob has done a couple of presentations already; that's Bob Yates.
I'll look for some of the older webinars and see if we can get them reposted; I'll probably put them into the community forum. I will say that next week's session is probably a place where we're going to have a lot more discussion. We're not necessarily going to do any demos around the Git Connector specifically, but certainly that's going to be complementary to the conversation, and we'll give a preview of what we're looking to do over the next twelve to fifteen months as it relates to both VS Code and the Git Connector. With that, I can't thank you enough, Stefano, Wolfgang, and Johan, for your presentations. As you can see, some people have commented that they were very well received and very well liked. I hope we can do these more often. Not to put too many words in the mouths of Stefano, Wolfgang, and Johan, but I will say that they're very good to work with. I've had a longer history with Johan, and certainly more recently with both Wolfgang and Stefano. These things are there to help networking, to get presenters like this engaged with other like-minded customers. So if you have questions, please use the community forum; those three, and many others of you on here, are always very active there. Otherwise, you can reach out to us, and we'd be more than happy to get you connected where it makes sense. Any final words, Stefano, Wolfgang, and Johan, before we end this? I think you're muted. But with that, as you can see, there is the survey. I'm trying to keep this open just a little bit to make sure everyone has a chance to go through it; I think it's only about thirty or forty-five seconds. But again, thanks everyone for your time this afternoon. Wolfgang, Stefano, and Johan, thank you all. Thank you, Jimmy. Thank you all. Thank you. Thanks. Bye-bye.
Bye now.