In this episode, Walter speaks with Cliff Bradley, President and CEO of TechAffinity Consulting. They discuss mainframe modernization at Meredith Corporation.
Walter:
Hi, everyone. Welcome to the latest edition of the Walter's World podcast series. My name is Walter Sweat, and I'm the CTO here at Astadia. Astadia focuses on mainframe migration and modernization projects, and today I'm delighted to have with me Cliff Bradley, who's the president and CEO of TechAffinity Consulting. TechAffinity and Astadia worked together on a very successful migration project, and I thought it might be informative for everyone to hear the background on how this project came about and what went into it, so that you can see whether a similar type of project might benefit you. Cliff, welcome to Walter's World.
Cliff:
Thank you for having me.
Walter:
We're delighted to have you here. Cliff, can you give us a little background on what you've done in the industry and how you came to be with TechAffinity Consulting?
Cliff:
Well, I've been in the IT world for 30 years, as most mainframe guys have. I started my career at GTE Data Services, which is actually now Verizon Data Services, in Tampa, Florida, and spent about eight or ten years with GTE, or Verizon after they changed names. I started TechAffinity Consulting in 2000. Our primary focus at that time was mainframe technologies, and one of our biggest clients was Time Inc. Time Inc. owned most of the major publications in the US — Time and Sports Illustrated and People — they had 58 magazines. We kind of cut our teeth in the publishing industry with our Time Inc. client and built a lot of applications over the years for them.
Walter:
So the project that we worked on together — how did it transition from Time Inc. to this particular project? What was really involved in it, what does the system do, and what was the history?
Cliff:
Yeah, I'll give you a little of the history of the actual application itself. We call it the presort system. Back in the late nineties, Time Inc. — like I said, they had 58 magazines and were the largest publisher in the world, with magazines all over Europe and Asia — hired us as consultants to help them build a customized USPS presort system. That presort system is mainly responsible for putting magazines in delivery sequence, creating bundles and pallets, and the logistics planning of getting magazines from the printers to the mailbox. Time Inc. was the largest publisher of weekly magazines — they had People, Sports Illustrated, Time, Time for Kids, and Entertainment Weekly. There really aren't a lot of other weekly magazines in the industry — there's BusinessWeek and The Week and a couple of others — but Time Inc. had the majority share, and a weekly is totally different than a monthly magazine.
Cliff:
For a monthly magazine, the delivery time is really not that important. But for a weekly magazine, most of the subscribers want their magazine in their mailbox every Friday afternoon. When you're trying to deliver 20 million magazines a week and get them to people's mailboxes on Friday afternoon, or Saturday at the latest, you need a pretty dynamic system with a lot of transportation planning. And these magazines print throughout the country — they don't print in one place. They print in California and New York and Wisconsin and Georgia. So we came in in the late nineties, in '97 or '98, and helped them build this presort system that was really focused on the weekly magazines. Then in 2017 and 2018, Meredith Corporation out of Des Moines, Iowa — who has magazines such as Better Homes and Gardens —
Cliff:
bought Time Inc. Meredith didn't have a presort system. They don't have mainframe technologies, and they had their own systems that took care of their magazines, with the exception of weekly magazines. And they didn't want to continue to pay for the mainframe Time Inc. had, because it was a huge IBM Z mainframe and the monthly cost was outrageous — I think it was around $50 million a year for their processing. So they wanted to keep this presort application, but they didn't want everything else. That's kind of how it started: they wanted to keep the presort system for the weekly titles, but they didn't want all the other systems that Time Inc. had.
Cliff:
So that gives you a background of the system itself and what it does. The other part of it is that there's actually another system, which didn't run on the mainframe, called freight planning. It was written for and ran on on-premise servers in New York — a PowerBuilder application with a Sybase database — and it's highly integrated with the presort system, in the sense that it does the truck planning, the routes, the bills of lading — all the logistics planning for the magazines. So there were actually two systems they wanted to keep: one on the mainframe and one on on-prem servers.
Walter:
Okay, super — thank you. I can only imagine the logistical nightmare of trying to keep all of that information current and accurate and timely, so making sure it stayed that way had to be of paramount importance, I'm sure. Can you give me an idea of the size of the application — the number of programs and tables, or a rough idea of the lines of code — just so the folks who are listening have a feel for the size?
Cliff:
Oh, absolutely. So, as I mentioned, it originally ran on the Time Inc. LPARs, which were around 3,500-MIPS machines. The first thing we did was take the code off those larger LPARs and put it onto a small LPAR by itself, and in doing so we determined that it really required about 300 MIPS. It had about 425 programs, 160 to 170 DB2 tables, 400,000 lines of code, and about 1,800 JCL members — and by JCL members I mean 1,800 jobs. So it was a fairly large system, but it had been contained within a very, very large, system-wide LPAR that had a lot of other applications. We extracted it out of that system and put it onto a smaller LPAR — that was our first step. Our second step would be to move it over to Micro Focus.
Walter:
That makes sense. And I'm assuming they probably didn't consider leaving it on the mainframe, just because of cost — would that be right?
Cliff:
That's right. When we moved onto the smaller LPAR — a 300-MIPS LPAR that contained only this presort system — and you add up the cost of licensing for Syncsort, Endevor, and a lot of other things, our monthly platform cost came to $85,000 a month. We ran on that LPAR for close to nine months to a year while we were migrating. So $85,000 a month was just for the presort system, once we excluded all the other Time Inc. applications.
Walter:
That's still very substantial, without a doubt.
Cliff:
Oh, it's over a million dollars a year just for platform costs — not including the people or a lot of the other things it takes to support a system.
Walter:
Wow. When did the project actually start, Cliff?
Cliff:
We started the project on December 5th, 2018. We had a hard end date of September 2019 — what I mean by that is, if we didn't finish by September 2019, we were going to have to sign up for another year on that 300-MIPS LPAR at $85,000 a month.
Walter:
No pressure there, huh?
Cliff:
No pressure. And Meredith was like, we're not going to pay another million dollars to keep the system — you've got to get it completed and off the mainframe by September 2019.
Walter:
And did you make it?
Cliff:
We did. We went live August 18th, so we had two weeks to spare. We ran in parallel for the first two weeks of August, and then we did not run on the mainframe for the last two weeks of August — we kept it there as a plan B in the event we needed to roll back, but we didn't. Everything went smoothly, and we met our deadline for our client.
Walter:
That's fantastic. So you talked about your mainframe background — had you actually had experience working on migration projects prior to this?
Cliff:
No, not at all — and neither had our resources. I had worked with Micro Focus back in the early nineties when I was at GTE Data Services, so I had some exposure to it, but it's changed a lot in the past 20 or 25 years. My recent experience with it was next to nothing, and the resources that worked on the project were traditional mainframe people who grew up on the mainframe and pretty much only did mainframe work. So we had little to no experience.
Walter:
Well, I know some people would consider that a leap of faith. I would hope you felt confident that you knew enough of the mainframe types of activities — the fact that you're working with batch, that you're working with the same kinds of data files — that it wasn't completely different, and that gave you some level of confidence, right?
Cliff:
Well, what happened is, we did our own little research and tried to start learning Micro Focus. But as you know, we partnered with Astadia because of that lack of experience and because of the confidence level we needed in order to commit to the client that we could get this thing done in nine months. We had a couple of weeks of training with some of the Astadia staff members, who taught us how to manipulate files, how to compile programs, how to map a copybook, and how to run a job. What we found is that after just a short training period of a week or two and getting in there — COBOL is COBOL, JCL is JCL, database access is database access.
Cliff:
Within a month or six weeks, our resources were off and running. We did lean on the Astadia folks for certain skill sets and things we didn't understand or know how to do, but the team transitioned into the Micro Focus environment very well — including myself. I may be the president of the company, but I helped design this system back in the nineties, so I know a lot about it. I still know how to code — I try not to code that much — but I also went through the training and got an understanding of how to work with Micro Focus and how to run the system on that platform. I found it very easy to learn and very easy to transition to with the skills we had from the mainframe.
Walter:
Yeah. Having started back in the dark ages on the mainframe like I did, I contend that it's a whole lot easier for an experienced mainframe person to pick up how to compile a COBOL program with Visual Studio or Eclipse than it is for someone off the street to learn TSO and ISPF, with all the manuals that go along with that. I don't miss those days.
Cliff:
Yeah. I find it a lot easier to compile and run a program in about four seconds in Micro Focus versus five or eight minutes on the mainframe. We use the Eclipse environment, and we find it very intuitive — we're able to compile a program, update it, and find errors very easily, even from a DevOps perspective.
Walter:
Being able to promote code and do automated testing with the different tools that are out there — I just find that to be such a better environment, quicker and more reliable. I'm glad to hear it sounds like your experience was the same.
Cliff:
Yes, certainly.
Walter:
As you mentioned compile time a second ago — what about performance? How well has the application performed now that it's not running on the mainframe and is running in this new environment?
Cliff:
That's a very good question. Being a traditional mainframe guy for 30 years, I wouldn't say I was pessimistic, but I wasn't real optimistic that the system would perform as well as it does, given it had been running on a 300-MIPS LPAR. On that LPAR it had about six terabytes of data, and it's a sort-heavy system — it's called presort, so it has a lot of sort routines and a lot of big files where you're sorting 4 million records. To give you an idea of how it ran on the mainframe: we would start around six o'clock in the evening, because of load balancing and the CICS environments that were up during the day — we couldn't run batch during the day,
Cliff:
and they always preferred us to run at night. For People magazine, we'd start at six o'clock through the scheduler — through TWS — and our system would generally complete in about 12 hours, so around six in the morning, with our deliverables due out to the plant by eight. If we ran into a problem or two, we might miss our eight o'clock deliverable, but in general it would take 12 hours for our People and Time runs, which are the two biggest magazines. When we got onto the Micro Focus environment, the first time we ran it, it ran in 17 hours, and I thought, this is never going to work. But then, working with Astadia and some other team members, we found some tools — the profiler option and some other things we could use to identify where the time was being spent in these programs.
Cliff:
Something I didn't mention is that we were running DB2 on the mainframe, and when we moved over to Micro Focus, we chose Aurora Postgres as the database. What we found is that the access paths within Aurora Postgres were different than DB2, and we had to tune the database a little bit. But long story short, once we went through that performance-tuning process, we're now running in five to six hours, which is a huge improvement. We're starting our system at three or four in the afternoon now, because we don't have the load-balancing concerns with CICS, and we're finishing at eight o'clock in the evening, maybe nine. We all go to bed knowing that everything's done for the night.
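A minimal sketch of the kind of access-path check described here, assuming a Postgres database reached with psycopg2; the table, columns, and connection string are invented for illustration and are not the actual Meredith schema.

```python
# Hypothetical illustration: inspect the Postgres plan for a query whose
# DB2 access path was different. Table, columns, and DSN are invented.
import psycopg2

PLAN_SQL = """
EXPLAIN (ANALYZE, BUFFERS)
SELECT zip_code, COUNT(*)
FROM presort_labels
WHERE issue_date = %s
GROUP BY zip_code;
"""

def show_plan(dsn: str, issue_date: str) -> None:
    """Print the planner's chosen plan so a slow sequential scan
    (where DB2 had used an index) is easy to spot."""
    conn = psycopg2.connect(dsn)
    try:
        with conn.cursor() as cur:
            cur.execute(PLAN_SQL, (issue_date,))
            for (line,) in cur.fetchall():
                print(line)
    finally:
        conn.close()

if __name__ == "__main__":
    show_plan("dbname=presort user=presort", "2019-08-16")
```

If the plan shows a sequential scan over a large partition, adjusting an index or refreshing planner statistics with ANALYZE is the usual first step in the kind of tuning pass Cliff describes.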
Cliff:
We don't have to worry about having an offshore shift anymore. I have an organization in Chennai and Bangalore in India, and we had two shifts supporting this application when it was running on the mainframe, because it ran long — it would start at six o'clock, we'd support it until about midnight, and then they'd come into the office in India, take over, and make sure the deliverables were met by eight. Now we only require one shift, because the run is done in five or six hours — it's finished before we go to bed in the evening, we know whether everything completed, and it's all set for the next morning. So it's been a huge performance improvement, which we were delighted to see.
Walter:
Well, I'm hoping that with that performance improvement you saw an appropriate amount of cost savings as well.
Cliff:
Yes. In addition to the performance improvement, what we find now is that our platform costs — which include the Micro Focus licensing fees and the AWS costs — run at about $8,000 a month. If you compare that to the $85,000 a month we were originally paying on that smaller LPAR, we're saving $77,000 a month, or about $924,000 a year, in system platform costs. It's amazing — roughly a 90% savings.
Walter:
That is truly astounding — a 90% savings with even better performance. That's quite a combination.
Cliff:
Quite a combination. And honestly, that cost savings is what saved this application. There are a lot of other savings with the system — postage costs and things like that — but Meredith was not willing to pay what it was costing to run on the mainframe just to keep it; they would have found other methods. Being able to realize those savings on Micro Focus is what saved the system.
Walter:
That's exciting, truly exciting. And obviously that success didn't come without challenges. Could you share with the audience some of the things you ran into that maybe on day one you didn't expect, but that you were able to find solutions for?
Cliff:
Yes. Initially we thought everything would just move over and we'd compile everything — that's what we kind of assumed. Then we started testing and running, and realized there were some things we hadn't thought about. One of them ties back to the database performance I mentioned earlier: on the mainframe, with COBOL and DB2, when a program disconnects from the database, it automatically terminates that thread. We found that in the Postgres environment it doesn't, so we were building up all these open connections, which was causing performance degradation. We had to go into all the programs and add an explicit disconnect from the database upon program completion.
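The actual fix was made inside the COBOL programs; the short Python sketch below only illustrates the principle, assuming a psycopg2 connection: the session stays open on the Postgres server until the client explicitly closes it.

```python
# Illustration only: DB2 on z/OS tears the thread down when a batch
# program ends, but a Postgres session lives until the client closes it,
# so each job step has to disconnect explicitly on completion.
import psycopg2

def run_batch_step(dsn: str) -> None:
    conn = psycopg2.connect(dsn)
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT 1")  # stand-in for the real batch work
            cur.fetchone()
        conn.commit()
    finally:
        conn.close()  # without this, idle sessions accumulate on the server
```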
Cliff:
In addition, the access paths are different — the way Postgres accesses the data — so we needed to fine-tune some indexes within the new database. It's also a highly partitioned database: we have a lot of different partitions for each magazine, so we can run one magazine without affecting the others. The other thing we noticed is that we selected an ASCII environment versus EBCDIC — in Micro Focus you can select either, but we moved to an ASCII environment — and we had to be very careful and diligent when sorting files on both alpha and numeric data, because they sort differently in ASCII than in EBCDIC. But in general we overcame those challenges, and the system is working.
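A quick way to see the collation point Cliff is making: the same mixed alphanumeric keys sort differently under ASCII and EBCDIC, because digits come before letters in ASCII but after letters in EBCDIC (code page 037 used here for the example).

```python
# Digits sort before letters in ASCII but after letters in EBCDIC,
# so mixed alphanumeric sort keys come out in a different order.
keys = ["A1", "1A", "a1", "Z9", "9Z"]

ascii_order = sorted(keys)                                    # ASCII byte order
ebcdic_order = sorted(keys, key=lambda k: k.encode("cp037"))  # EBCDIC byte order

print(ascii_order)   # ['1A', '9Z', 'A1', 'Z9', 'a1']
print(ebcdic_order)  # ['a1', 'A1', 'Z9', '1A', '9Z']
```

Any sort step whose output feeds a later match or merge against data sorted the other way has to be checked for exactly this difference.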
Walter:
That's super — that is super. Talking about the mainframe and the beauty of it — that it just runs all the time and you can depend on it all the time — part of that relates to disaster recovery. What has the difference been between mainframe disaster recovery and running in this new environment?
Cliff:
This was actually probably one of the biggest delights we have experienced in this transition. Those who have worked on a mainframe for 30 years know that a disaster recovery test is a pretty big activity. When these systems were running under the Time Inc. umbrella, so to speak, we'd have a 48-hour window for disaster recovery. We'd have 15 or 20 system programmers bringing up the systems from the tape backups, bringing up DB2, and loading all the DB2 tables, and in general we'd have about four to eight hours at the end of that 48-hour window, after they recovered the whole environment, to try to test our application. In 20 years of supporting this application, we probably tested it fully and successfully two or three times, because we were at the very end of the process and we'd just run out of time. We've done two DR tests since we moved over to Micro Focus, and they took a total of about three hours with two people — one of the AWS administrators and one of my resources. We flip over to the backups, we pick up and run on them for a couple of hours, and then we flip back. So the DR testing has been just outstanding.
Walter:
That's exciting to hear. Just from the confidence level of knowing that you can do that on a more scheduled basis and ensure that you have time to do it — that's got to be a real positive for you.
Cliff:
Absolutely, absolutely.
Walter:
Coming back to your team — I'm curious, on the mainframe there's a wealth of tools out there and available. Did you find it easy or hard to, not necessarily replace, but find solutions for things like debugging, and for tools like File-AID or Xpediter? What has that experience been like for you and your team?
Cliff:
There are equivalent tools in the Micro Focus environment — and actually, now that we've used them for over a year, I think they're better tools. For performance you have the profiler option within Micro Focus, which tells you what percentage of the time is being spent in each paragraph of a program, and that's a really good tool for performance tuning. You can look at it and say, oh, 90% of the time is being spent in this particular paragraph of this program, isolate it to that, and figure out where in that paragraph the time is going. That's what we use for performance tuning, for the most part. There's also the data file editor, which is just like File-AID — you can map copybooks against files and look at the record layouts.
Cliff:
It's extremely similar to File-AID, but it's more visual — you can jump around records a lot more and change data a lot more easily. Then there's the debugging tool, which does the same kinds of things as the debugging tools on the mainframe, and it's actually a lot easier to use — putting in your stops and your breakpoints, and you can see everything visually. So we didn't find any deficiencies in the Micro Focus toolset compared to what we had on the mainframe.
Walter:
Oh, that's exciting to hear. Can you give us a rough idea of what it was like going from DB2 to Aurora Postgres, as you mentioned earlier?
Cliff:
Sure — let's talk about data conversion first. Like I said, we had probably six terabytes of data in DB2 on the mainframe, but we only needed about two terabytes, because we didn't bring all the history with us. We took it as an opportunity to say, all right, let's just get rid of all this old data that's no longer needed. So we had about two terabytes to bring over, and there aren't any tools that will go directly from mainframe DB2 to Postgres. What we did, with the help of Astadia, was set up DB2 LUW as kind of a hop: we converted the data from DB2 to DB2 LUW, and then there were tools to convert from DB2 LUW into Postgres.
Cliff:
That's how we handled the data conversion. It went well — like I said, we had 162 tables and about two terabytes that we converted, and everything went extremely well in that process. Now, as far as my team learning how to use Postgres versus DB2, we found that the SQL statements are exactly the same. If you want to SELECT * from a table, or do a GROUP BY or a SUM, anybody who has any kind of SQL skills can transfer them easily to Postgres. Then there's how you access it — the tool we're using is pgAdmin to get at the Postgres database — so we had a little bit of a learning curve: how do you write a query, how do you execute a query, how do you load a table, how do you unload a table, things like that. But it's much more visual — it's not just a green screen on the mainframe; you're clicking and submitting — and it was very easy to learn.
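A rough sketch of that portability, not the actual tool chain used for the migration: the same kind of SELECT / GROUP BY statements carried over from DB2 run unchanged on Postgres, and a delimited extract can be bulk-loaded with COPY. Table, column, and file names below are invented for illustration.

```python
# Hypothetical example: bulk-load one unloaded table from a CSV extract,
# then run the same style of aggregate query that ran against DB2.
import psycopg2

PORTABLE_QUERY = """
SELECT magazine_code, COUNT(*) AS copies
FROM subscriber_labels
GROUP BY magazine_code
ORDER BY copies DESC;
"""

def load_and_report(dsn: str, csv_path: str) -> None:
    conn = psycopg2.connect(dsn)
    try:
        with conn.cursor() as cur:
            # Bulk-load the extract, then report copies per magazine.
            with open(csv_path) as f:
                cur.copy_expert(
                    "COPY subscriber_labels FROM STDIN WITH (FORMAT csv)", f)
            cur.execute(PORTABLE_QUERY)
            for magazine_code, copies in cur.fetchall():
                print(magazine_code, copies)
        conn.commit()
    finally:
        conn.close()
```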
Walter:
I don't know if you would agree, but the extensibility of the tools that are available — whether it's database access, debugging, or extending Eclipse — I have just found that to make such a drastic difference compared to what I was used to when I first started my mainframe career. It sounds like you've experienced some of the same.
Cliff:
I would agree. I think it has actually reduced our development cycle, even though the system is in maintenance mode. Just in the last four weeks we brought in The Economist — a non-Meredith weekly magazine — to presort, do their distribution, and do their freight planning, and we had some development to do for them. They had special characteristics and special logic for Latin America that we didn't have, and they had some hand-delivered copies of The Economist that go to senators and others in Washington, D.C. that needed separate, new types of labels, not delivered through the post office. Our development cycles, I think, were significantly reduced in this environment. So we still have development activities going on, and I think the guys are building and testing quicker in this environment than they would on a mainframe.
Walter:
Well, exciting times, Cliff. We're almost at the end of our podcast time, and I just want to thank you again for coming on with us today — it's such a fascinating success story, and I love it every time I get to hear you talk about it. If anyone wanted to reach out to you to ask questions about your experiences, or to learn more about what you do at TechAffinity Consulting, how could they best reach you?
Cliff:
Well, one option is looking at our website at techaffinityconsulting.com. The second option is to send an email to me directly at bradleyc@taffconsulting.com — that's T as in Tom, A as in Apple, F as in Frank, F as in Frank, consulting.com. I'd love to hear from you.
Walter:
Again, Cliff, thank you so very much for taking the time — I really do appreciate it. And for everyone in the audience, thank you for taking the time to join us again today. If you need to reach out to Astadia, visit www.astadia.com, and please keep a lookout for our upcoming podcasts. It's always a pleasure. Thank you so much.