Earlier I wrote a series of articles discussing what happens when we connect to a website, using the analogy of a post office to describe how messages get routed. I talked a little bit about what happens on the business side of things, and now I want to go into much greater detail.
Let's say you want to shop online, or transfer funds, or any of the zillion things we now do over the internet.
You open a browser on your phone, tablet, laptop or desktop and connect to a URL. In my previous series of posts I described how this gets translated into a series of messages that get routed to the 'front office' of a business, which then sends your information on to their fulfillment center or distribution center for processing.
There's a bit more to it than that. You see, the business will have one machine (or building) that responds with the webpage you requested, but in order to fulfill your request it needs to know a few things. Like your login info, and whether you're authenticated as the person authorized to view your account info. Then it needs to find your particular information (out of all the other people who have accounts there) and let you see yours, and yours alone. Plus there has to be a method for adding new customers, removing old ones, getting your billing information, and more.
So one machine may be dedicated to offering up the requested web page, and another machine may handle authenticating login information, and still another machine may hold the database with all your order history or transaction history, and still another may be secured more tightly because it holds everyone's billing information, and so on and so forth.
But wait, there's more!
If the business is reasonably large, it may have thousands or millions of people interacting with its websites on any given day. So it needs a way of handling all that traffic. PLUS, people get pretty upset when a service isn't available. If they want to order something, or pay a bill, or whatever, they want to do it Right. Now. And they aren't going to be pleased if your website crashes.
So businesses need redundancy, because you as an individual may survive if your hard drive crashes, but a business might not. So there are ways of having two or three machines act like one web server, so that if one fails the other two can pick up the slack. And there are things called 'load balancers' that help make sure all that traffic gets routed to the servers evenly. Otherwise, one server might get so overwhelmed that it responds to requests more slowly, while another sits idly by.
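To make the failover idea concrete, here's a minimal sketch in shell (the hostnames and health-check URL are made up for illustration). Real load balancers do this continuously and far more robustly, but the core logic is the same: try each backend in order and route to the first healthy one.

#!/usr/bin/env bash
# Failover sketch: try each backend in order, route to the first healthy one.
# The hostnames and the /health URL are invented for illustration.
BACKENDS=("web1.example.com" "web2.example.com" "web3.example.com")
for host in "${BACKENDS[@]}"; do
  if curl -sf --max-time 2 "http://$host/health" > /dev/null; then
    echo "Routing traffic to $host"
    break
  fi
  echo "$host is not responding, trying the next one..."
done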
But if you have two or three machines doing the same thing, you also have to make sure they're synchronized and share the same data sources. So instead of having a hard drive on one machine, you'll probably store everything on a shared Storage Area Network (SAN), which will also have built-in redundancy so that if one of the drives fails the data can still be recovered.
Oh, and you also have to worry about making sure that transactions happen exactly once per request. That is, if something crashes while a request is being processed, you have to make sure that the request either finishes processing or doesn't process at all, undoing anything done before the failure. That way you don't get charged twice for the same order or something.
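Databases handle this with transactions: a group of changes that either all take effect or none do. Here's a tiny sketch using the sqlite3 command-line tool (assuming it's installed; the tables and values are invented for illustration):

sqlite3 orders.db <<'SQL'
-- Tables and values invented for illustration.
CREATE TABLE IF NOT EXISTS orders (customer_id INTEGER, item TEXT);
CREATE TABLE IF NOT EXISTS accounts (customer_id INTEGER, balance REAL);
BEGIN TRANSACTION;
INSERT INTO orders (customer_id, item) VALUES (42, 'widget');
UPDATE accounts SET balance = balance - 9.99 WHERE customer_id = 42;
-- If anything fails before this COMMIT, neither change takes effect.
COMMIT;
SQL

Recording the order and deducting the payment happen together or not at all, which is exactly that 'once and only once' guarantee.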
All of which sounds like a lot, when you think about it. But businesses are aware of all this, and most of them have figured out how to make it happen (even if that sometimes involves outsourcing services, like using a cloud provider to manage your machines).
Anyways. Three years ago if you had asked me what an application was, I'd probably have said it was something like Word for Windows, or Pokemon Go. You download some sort of file (most likely an .exe file, since it's an executable), install it, and it does stuff.
Learning to code made me much more aware of how complicated creating that .exe is. I mean, the Windows operating system has something like 50 million lines of code. Trying to figure out how all that fits together would be insane.
I knew there had to be a way to allow multiple people to work together on a program that large, and I'd heard about things like GitHub and understood the importance of version control. After all, I'd had the joy of trying to figure out why a change in one part of my program broke something in another part, and that was all just me. Trying to manage the efforts of five or ten different people all working on different parts of the program at the same time? That requires a good supporting structure in terms of tools (like GitHub), division of labor (who is responsible for which parts of the program), and procedures for deciding when something gets accepted into the official program.
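In practice, the day-to-day version of that division of labor looks something like this (the repository and file names here are made up):

git clone https://github.com/example/big-program.git  # get your own copy of the code
cd big-program
git checkout -b fix-billing-rounding   # work on your own branch, isolated from everyone else
# ...edit some files...
git add billing.c
git commit -m "Fix rounding error in billing totals"
git push origin fix-billing-rounding   # publish the branch, then open a pull request

That last step, the pull request, is where the acceptance procedures live: someone reviews the change before it gets merged into the official program.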
Anyways, I've come to realize that on the business side of things, 'application' refers to much more than just the lines of code that get compiled into an executable. Especially since more and more businesses offer up their applications as websites, which has several advantages (e.g. the user just has to remember the URL, and the business can update the application whenever it wants... the client doesn't have to download and install anything, they'll just see the changes the next time they visit. My company apparently used to offer a .exe program that our customers downloaded and installed, but I believe we stopped doing that and now offer it as a web application).
Which means that, from the business side at least, an 'application' refers to more than just the code that goes into it. It also refers to the various machines required to make the website work, including the databases, the processing of requests, and more.
And we're still not quite at what I'm doing.
See, businesses need a process for developing and maintaining their application... something like the software development process, which comes in a lot of different flavors that are worth looking up. Most are variations on the same basics: figure out what the requirements are, write the code to meet them, then test, test, and test some more before finally releasing it into production. And really, that last stage might be considered the final test, as anyone who's had to deal with bugs after an update can attest.
Each of those phases needs its own version of the application. They might not get the heavy traffic that the production application does, so they may not need the load balancers and multiple servers, but they still need a web server, a database, a machine for authenticating users, etc. In other words, you need to duplicate the entire environment.
Not only that, but occasionally issues come up with the current (live) version... the one in production. Maybe a vulnerability was discovered, whether in the business's own program or in third-party software the business uses, and a patch needs to be applied. Maybe an issue came up after the latest version was released into production. Plus, if you've divided up the labor, you may need one environment focused on a particular part of the application (like billing, or the website) and another focused on something else. Whatever the reason, you need multiple environments for every stage of development.
And this is where I come in. My official job title is "Technical Integration Engineer". We've got, I dunno... maybe 40+ environments involved. Each with at least four or five different applications. I say I don't know because some of them have been retired or aren't currently in use, so I can't really say how many there are altogether.
Each of them has to deal with reboots and with what we call a 'build push'. Reboots because, as you may have experienced yourself, rebooting can clear out stale data that would otherwise cause errors and bugs, so it's good to reboot your machines regularly. And a build push? Well, if you've made some changes and want to test them, you have to incorporate them into your software build and then push that build into the environment for testing. Then you can try doing all the actions a user would and see whether it works.
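I can't speak for every shop, but stripped down to its bones, a build push amounts to something like this (every path, hostname, and service name below is a placeholder):

#!/usr/bin/env bash
# Rough sketch of a build push; paths, hosts, and service names are placeholders.
set -e                                    # stop immediately if any step fails
./build.sh                                # compile and package the new build
scp app-build.tar.gz deploy@test-env:/tmp/
ssh deploy@test-env 'tar xzf /tmp/app-build.tar.gz -C /opt/app && sudo systemctl restart app'

The real pipelines are fancier, but it's the same shape: build, copy into the environment, restart the service.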
I'm still very much at the beginner stage of my job, so right now most of it is about dealing with whatever issues come up while rebooting or pushing a build. It also means monitoring how much memory and disk space we're using, and clearing out older, obsolete files if we start running low.
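The housekeeping side mostly comes down to a handful of standard commands; the paths here are placeholders for wherever the application actually lives:

free -h                         # how much memory is in use
df -h /opt/app                  # how full the disk is
du -sh /opt/app/logs/*          # which directories are eating the space
find /opt/app/logs -name '*.log' -mtime +30 -delete   # delete logs older than 30 days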
It often means working on the 'back end', that is... if someone in one of these environments is having a problem accessing a web page or performing an action, I'm checking logs or running scripts in the shell. Luckily, my predecessors have already created a bunch of scripts for our most common tasks. Mostly I'm just learning my way around: learning where to find the logs or scripts for which application in which environment, and what to do when one of the gajillion alerts comes up. (I learned about something called 'alert fatigue', and I think my organization really suffers from it. I've also spent a bit of time coming up with a system for my Outlook e-mail that I think is satisfactory. I already knew about creating rules to sort e-mail, but we get waaaaaayyyy too many messages for that alone to handle. So I simplified things and created some rules dividing everything by environment and sending all the automated e-mails to a couple of folders. Then I created some search folders so I can easily find any specific alert or report. Outlook was annoyingly unhelpful at some of what I wanted to do... I'd love to place some of those search folders next to the folders for whatever the alert was about, and given how often I saw other people asking for the same thing when I looked online for solutions, I think it's a pretty common desire... but I'm going to guess there'd be some complicated coding involved. Anyways, I put the search folders I know how to act on up in my Favorites, so I can easily see when something new comes up.)
There's a lot more, of course. I'm still learning what various alerts and messages mean, and I'm sure I'll eventually be updating and/or writing my own scripts. I spend most of my time on the command line (or, well, with Linux it's a terminal running a Bash or Korn shell), and I'm getting pretty good at running commands like 'ps -ef | grep <xxxx>' to find whatever processes currently match <xxxx>.
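A few of the commands that come up constantly, with 'tomcat' and the log path standing in for whatever process or log you're actually after:

ps -ef | grep tomcat                  # every running process matching 'tomcat'
pgrep -fl tomcat                      # same idea, without grep matching its own process
tail -f /var/log/app/app.log          # watch a log file as new lines arrive
grep -i error /var/log/app/app.log | tail -n 20   # the 20 most recent lines mentioning errors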
I think I can safely say that an application is a lot more than just the .exe file. It includes the machines, the third-party software for synchronization and the like, the database queries, scripts, logs, and the various methods for monitoring and alerting when issues develop... all of that, for each of the many, many environments...
And the back end is a complicated, complicated place.