Selection of an Information Technology Platform

While trying to develop a lasting solution to the urgent systems problem at IMT Custom Machine Company, Inc., Browning found that the information flow started with marketing and customer interaction. Questions about specifications raised in marketing required consulting a design engineer or another local expert. Moreover, all drafting of production requirements at Fort Wayne and Chicago was done on a CAD application system, which ran on the IBM mainframe in Fort Wayne and on local IBM workstations in Chicago.

It was also found that the programmers in MIS had extensive backgrounds in COBOL and in RPG for the AS/400, but no knowledge of the UNIX operating system. Nevertheless, the business group management team considered it appropriate to institute a common custom machine design system across all factories. The German program would be ported to a UNIX workstation platform and supported and distributed globally.

Regarding new marketing and negotiation systems, the marketing group had proposed a reengineered “front-end information” system that would optically scan all customer proposals so that customer specifications could be analyzed and processed more quickly. Moreover, the field sales group planned to implement a new customer relationship management application to speed the transfer of order information to the factories. New software design tools had been planned, though no specific software had been finalized. There was also a proposal to completely replace the Bills of Material system with one that would run on the IBM mainframe.

The options that were proposed included, first, a move toward a centralized, likely IBM, computing environment: the company would commit to staying with the mainframe for all important applications and allow the use of Linux on the mainframe. This platform would run the OS/400, AIX, and Linux operating systems. It would optimize the use of the lower-cost, energy-efficient mainframe and provide centralized usage support and control. Under this option, the older packages would be phased out within five years, and almost all computation would be done on the mainframe.

Centralization ensures that information technology projects and long-term goals complement the company’s business strategy. Moreover, a centralized network provides a high degree of security: because the client machines are separate from the server, an attack that compromises a client cannot readily be passed on to the server. Further, a centralized system would allow multiple sites to be monitored from a central command-and-control communications center, increasing the chances of mitigating company liability in case of a crisis (Chew & Gottschalk, 2009).

Corporate decision makers would also gain enhanced control over time-critical scenarios, and the isolation of special access categories would be valuable. However, centralization brings a certain formality into IT’s relationship with the various business units, creating distance (Khosrow-Pour, 2006). Additionally, clients bear minimal responsibility for keeping their own data secure. It is also hard to integrate systems from various manufacturers cost-effectively, and personnel are likely to resist change.

The workstation computing option would require a complete phase-out of the mainframe over time, allowing a shift to a full client-server environment. A relational database server cluster would serve the entire UNIX network, though LANs would exist as necessary (Hennessy & Patterson, 2012). However, this approach is more expensive and carries a higher probability of producing a fragmented computing environment, with an increased chance that central control would be reasserted within a few years.
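To make the client-server pattern behind this option concrete, the sketch below shows a central database server answering part lookups from workstation clients over the network. It is a minimal illustration in Python, not anything from the case: the part numbers, host, and port are invented, and a real deployment would use a relational database rather than an in-memory table.

```python
import json
import socket
import socketserver
import threading

# Illustrative stand-in for the central relational database cluster:
# an in-memory bill-of-materials table keyed by part number.
BOM_TABLE = {
    "P-1001": {"description": "transformer coil", "on_hand": 42},
    "P-1002": {"description": "core lamination", "on_hand": 7},
}

class BOMRequestHandler(socketserver.StreamRequestHandler):
    """Answer one part-number lookup per client connection."""
    def handle(self):
        part_number = self.rfile.readline().decode().strip()
        record = BOM_TABLE.get(part_number, {"error": "unknown part"})
        self.wfile.write(json.dumps(record).encode() + b"\n")

def lookup(part_number: str, host: str = "localhost", port: int = 9090) -> dict:
    """Client side: any workstation on the LAN/WAN can query the server."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall(part_number.encode() + b"\n")
        return json.loads(conn.makefile().readline())

if __name__ == "__main__":
    server = socketserver.ThreadingTCPServer(("localhost", 9090), BOMRequestHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    print(lookup("P-1001"))  # {'description': 'transformer coil', 'on_hand': 42}
    server.shutdown()
```

The point of the pattern is that the data lives in one place while any number of clients share it, which is also what makes the later reassertion of central control plausible.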

The third proposal involves virtual computing, whereby the company would outsource the management of its servers to a data center hosting company that would set up and manage “virtual machines” for IMT. This strategy would entail abandoning the mainframe and converting the complete computing platform to a Linux environment (Agrawal, Biswas & Nath, 2014). Although risky, this strategy eliminates the need for additional computer hardware investment, and the company would pay only for what it needs. Further, virtual computing lowers upfront infrastructure costs.

Moreover, it becomes easier to scale the company’s applications. The company pays only for what it uses, and all systems are managed under service-level agreements (SLAs). Virtual computing is also environmentally beneficial, since users sharing large systems efficiently results in lower carbon emissions. However, it entails greater dependency on the service provider. There is also a risk of being locked into proprietary or vendor-recommended systems, which may make it hard to migrate to another system or service provider, and it becomes risky if the provider stops supporting a system the company depends on. The approach relies heavily on the internet connection, and trusting a provider with valuable company data poses a potential security and privacy risk.
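The pay-for-what-you-use argument can be illustrated with back-of-the-envelope arithmetic. The figures below are invented for illustration and do not come from the case; the point is only that hosted cost scales with consumption, while mainframe ownership is largely a fixed cost.

```python
# Hypothetical figures for illustration only; none come from the case.
MAINFRAME_FIXED_COST = 60_000.0  # fixed monthly cost of owning the mainframe
VM_RATE_PER_HOUR = 0.45          # hosted virtual-machine rate, per VM-hour

def hosted_monthly_cost(vm_count: int, hours_per_vm: float) -> float:
    """Pay-per-use: cost scales with the VM-hours actually consumed."""
    return vm_count * hours_per_vm * VM_RATE_PER_HOUR

# A light month (20 VMs, business hours only) vs. a heavy month (80 VMs, 24x7).
light = hosted_monthly_cost(20, 8 * 22)   # $1,584.00
heavy = hosted_monthly_cost(80, 24 * 30)  # $25,920.00

for label, cost in [("light month", light), ("heavy month", heavy)]:
    print(f"{label}: ${cost:,.2f} hosted vs ${MAINFRAME_FIXED_COST:,.2f} fixed")
```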

The wait-and-watch strategy would require no fundamental changes to the current systems; changes to specific systems would be dictated by circumstances, and decisions would be made according to immediate demands. This approach, however, would perpetuate the same problems and spawn more committees, and hence would not be appropriate for solving the company’s information technology issues.

Alternatively, the vice president could opt for desktop virtualization, which is based on a thin-client computing model. The virtual desktop server handles all operating system, storage, and application processing tasks (Agrawal et al., 2014). Thin-client endpoints run minimal local software, though some use a zero-client approach that requires no local software at all: the endpoints boot from firmware without local storage and connect directly to the desktop server.

This platform saves the company money by reducing endpoint hardware requirements, and existing computers can serve as thin clients without modification, extending their normal life cycles. Moreover, the system consumes less energy because the hardware requirements are low. In the event of a breakdown, only the faulty thin client is replaced, reducing time and expense. Further, a user can log into a virtual desktop from any endpoint without software installation or data copying. The platform simplifies disaster recovery planning in addition to lending itself to enterprise security (Agrawal et al., 2014).
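The “log in from any endpoint” property comes from a connection broker that maps users to server-hosted desktops. The sketch below is a toy illustration of that idea, not any vendor’s product; the class, pool size, and user name are invented.

```python
# Toy connection broker: maps users to virtual desktops hosted on the
# server, so any endpoint can resume any session. Purely illustrative.
class DesktopBroker:
    def __init__(self, pool_size: int):
        # Desktops live on the server; endpoints hold no local state.
        self.free = [f"vdesktop-{i:02d}" for i in range(pool_size)]
        self.sessions: dict[str, str] = {}  # user -> assigned desktop

    def connect(self, user: str) -> str:
        """Return the user's existing desktop, or assign one from the pool."""
        if user not in self.sessions:
            if not self.free:
                raise RuntimeError("no virtual desktops available")
            self.sessions[user] = self.free.pop()
        return self.sessions[user]

broker = DesktopBroker(pool_size=3)
print(broker.connect("browning"))  # newly assigned desktop
print(broker.connect("browning"))  # same desktop again, from any endpoint
```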

The recommendations would require an evaluation of Business Process Reengineering and implementation of the Systems Development Life Cycle (SDLC). Moreover, it would be helpful to create organizational Standard Operating Procedures and to use a WAN consisting of interconnected LANs to share information across the organization. In implementing the SDLC, a system would be put in place for users through training, documentation, conversion, and post-conversion activities (Milton & Neto, 2014).
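One way to keep those four user-facing rollout phases visible is a simple progress tracker, sketched below. The phase names follow the paragraph above; the individual tasks are invented examples rather than items from the case.

```python
# Tracks the SDLC rollout phases named above; task entries are invented.
rollout = {
    "training":        ["schedule CAD workshops", "train MIS staff on UNIX"],
    "documentation":   ["write user guides", "publish SOPs"],
    "conversion":      ["migrate bills of material", "cut over order entry"],
    "post-conversion": ["collect user feedback", "retire legacy packages"],
}

completed = {"schedule CAD workshops", "write user guides"}

for phase, tasks in rollout.items():
    finished = sum(task in completed for task in tasks)
    print(f"{phase:15s} {finished}/{len(tasks)} tasks complete")
```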

Software license optimization efforts would ensure cost predictability in software budgeting and procurement, optimizing information technology spend and return on investment. Company staff would get real-time dashboards that simplify capacity and configuration management as well as performance identification and troubleshooting. Moreover, the system would maximize space and decrease licensing costs.
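As a toy example of the kind of figure such a dashboard might surface, the sketch below computes seat utilization and idle spend per license; all seat counts, prices, and the 75% threshold are invented for illustration.

```python
# Invented license-utilization figures of the kind a dashboard might show.
licenses = {
    "CAD suite":   {"seats_owned": 50, "seats_in_use": 31, "cost_per_seat": 1200},
    "CRM package": {"seats_owned": 80, "seats_in_use": 78, "cost_per_seat": 300},
}

for name, lic in licenses.items():
    utilization = lic["seats_in_use"] / lic["seats_owned"]
    idle_spend = (lic["seats_owned"] - lic["seats_in_use"]) * lic["cost_per_seat"]
    flag = "over-provisioned" if utilization < 0.75 else "ok"
    print(f"{name}: {utilization:.0%} utilized, ${idle_spend:,} idle spend [{flag}]")
```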
