Review: IT Savvy by Peter Weill



Book Review:  IT Savvy: What Top Executives Must Know to Go from Pain to Gain by Peter Weill.

Peter Weill offers a simple digest of a complex topic: best practices and methodology for organizational information and communication.  Weill recognizes that most organizations are IT challenged; however, the right strategy can transform IT from a liability into an asset.

According to Weill, organizations that consistently use IT to elevate performance are IT Savvy. His research suggests that organizations that invest in IT-savvy strategies have margins 20% higher than the industry average, while organizations without such a strategy have margins 32% lower.

The IT Savvy model:


  1. Define your Operating Model.
  2. Revamp your IT funding model so that IT spending supports the operating model.
  3. Build a digitized platform of business processes.  Standardize processes that will not change; concentrate on elements that do change.
  4. Exploit the digitized platform for growth.  Use IT to drive value and business growth.

Case studies include situations from Aetna, Pfizer, Seven-Eleven Japan (SEJ), and UPS.  For example, Weill demonstrates how SEJ transformed its IT services from a strategic liability into an asset.  Examples also include IT funding models, outcome-oriented business cases, and transparency innovations.

IT Savvy is an easy-to-read and informative book about IT execution.  It is a great book for executives, management, and IT professionals.  Weill provides practical strategies for implementing IT process improvement throughout the organization.


Assessment Model of BYOD: Adoption of Personal Devices at the Workplace

Brief History of Mobile Technology; BYOD Methodology

by Steven Jordan, December 16th, 2013.



Chapter II:  Literature Review

     BYOD refers to personal devices that connect to corporate networks.  These devices can introduce threats to vulnerable corporate systems.  BYOD policy is a network strategy that manages employees’ personal devices.  Companies without a BYOD policy may be unprepared as employees overwhelm network resources with smartphones, tablets, and laptops.

          This literature review contributes to the adoption process of BYOD policy.  The adoption process is an initiation phase that consists of “gathering information, outlining and planning” (Bouwman et al., 2005).  Managers and network administrators may use this review as a reference to support decisions on whether to implement or reject BYOD policy.

     This literature review explores the state of BYOD technology in three areas: (a) historical influences of workplace technologies; (b) qualitative risk and benefit analysis for personal technology at the workplace; and (c) exploration of the selection of BYOD methodology.
  
Background

This literature review explores the origins of BYOD in chronological order, framed by four significant developments: (a) Moore’s Law, as it relates to workplace technology; (b) Moore’s Law for power consumption; (c) Koomey’s Law, as it relates to mobile efficiency; and (d) Grove’s Law, as it relates to bandwidth constraints.

     Moore’s Law.  Gordon Moore co-founded the Intel Corporation in 1968 (Intel, 2013).  Moore’s Law is based on his prediction that “the number of transistors incorporated in a chip will approximately double every 24 months” (Intel, 2013).  Moore’s Law is specific to chip complexity, but it serves as an approximation for all components within a computer system (Koomey, Berard, & Sanchez, 2011, p. 47).  Perpetual innovation of computer systems has changed the way people work.

     Moore’s Law has influenced corporate computing for nearly a half century.  The first punch card tabulator was invented in the late 1880s and was used to automate U.S. census data (Carr, 2008, p. 45).  Punch cards were common by the 1930s (Carr, 2008, p. 47).  In the 1970s employees worked with terminals and datacenter mainframes (Carr, 2008, p. 52).  In the 1980s employees transitioned to desktop PCs (Carr, 2008, p. 55).  Modern workstations have become standard office technology.

      Moore’s law for power consumption.  The popularity of the workstation has created an imbalance between consumption and efficiency.  Workstations use an average of 25% of their processing potential, and storage capacities average under 50% utilization (Carr, 2008, p. 56).  Electricity is wasted when resources remain idle.  The workstation model is inefficient because it wastes scarce resources.

     Wu-chun Feng introduced “Moore’s law for power consumption – that is, the power consumption of computer nodes doubles every 18 months” (Feng, 2003).  Each generation of computer chips consumes more energy and generates more heat (Carr, 2008, p. 57).  Heat reduces computer component reliability: failure rates double with every 18°F increase in temperature (Koomey, Berard, & Sanchez, 2011, p. 49; Feng, 2003).  Heat is especially harmful to lithium-ion powered gadgets such as smartphones, causing the average smartphone to lose 35% of its battery capacity per year (Herman, 2011).

     Moore’s law for power consumption presents an obstacle to mobile computing: (a) computers have an insatiable appetite for power; and (b) heat has a negative impact on mobile efficiency.  The amount of energy required to operate PCs does not scale down for mobile computing; as a result, demand for power exceeds the available supply.  Functional scalability for mobile devices requires innovations in efficiency.

     Koomey’s Law.  Consumption and efficiency are important distinctions.  Koomey’s Law states that the electrical efficiency of computation “doubled about every 1.5 years” (Koomey, Berard, & Sanchez, 2011, p. 52).  Alternatively, the power required per computation decreases 50% every 1.5 years (Koomey, Berard, & Sanchez, 2011, p. 52).  Koomey’s Law outlines two potential outcomes of computational innovation: (a) computational capability increases with no change in power consumption; or (b) computational capability holds steady while power consumption decreases.

     Simultaneous increases in power consumption and efficiency are ostensibly at odds, yet both trends hold because each has different implications for different devices.  Consumption is insignificant for workstations because electric outlets supply their power; meanwhile, efficiency gains go unrealized as idle workstations continue to draw power.  Mobile devices, on the other hand, are battery operated, so a limited supply of power makes efficiency gains revolutionary (Koomey, Berard, & Sanchez, 2011, p. 50).  For example, assume a smartphone manufactured in 2013 operates for 10 hours on a charge.  According to Koomey’s Law, a similar phone manufactured in 2016 would operate for roughly 40 hours, two doublings later.  Smart devices are viable because of efficiency innovations.
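
The projection arithmetic is easy to make explicit.  Below is a minimal Python sketch of a Koomey’s Law projection; the 1.5-year doubling period comes from Koomey, Berard, and Sanchez (2011), while the 10-hour starting point and three-year horizon are the illustrative assumptions from the example above.

    def koomey_battery_hours(base_hours, years, doubling_period_years=1.5):
        # Efficiency doubles every 1.5 years, so battery life for a fixed
        # workload doubles on the same schedule, all else held equal.
        doublings = years / doubling_period_years
        return base_hours * (2 ** doublings)

    # Illustrative values: a 2013 phone that runs 10 hours, projected to 2016.
    print(koomey_battery_hours(10, 3))  # 40.0 hours (two doublings)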

     Grove’s Law.  Mobile devices require efficient power to operate.  Mobile devices also require sufficient bandwidth to be useful.  Grove’s Law says, “Telecommunications bandwidth doubles only every century” (Carr, 2008, p. 58).  The concept of bandwidth traces to Claude Shannon’s information theory; Shannon’s formula calculates the maximum rate at which data can be sent without error (Hardesty, 2010).

     Shannon’s Information Theory was developed in 1948 (Shannon, 1948).  It took nearly half a century before large volumes of information (i.e., bandwidth) could be transferred over long distances.  Communication infrastructure was built upon copper cables (Carr, 2008, p. 57).  Data travels across copper cables in the form of alternating current.  Sine waves graph the positive and negative oscillations associated with alternating current (Odom, 2006, p. 170).  Frequency is a sine wave measurement that counts the number of contiguous oscillation cycles per second (Odom, 2006, p. 22).  For example, 3400 cycles per second indicates a frequency of 3400 Hertz (Hz).  Incidentally, analog traffic uses the frequency range of 300 to 3400 Hz (Cisco, 2012).  The 3400 Hz ceiling correlates with the 33.6 Kilobits per second (Kbps) analog modem, and demonstrates that bandwidth is proportionate to frequency.  Copper cable restricted most commercial data transmission to the 300 to 3400 Hz frequency range until the 1990s (Cisco, 2012).
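
Shannon’s limit can be computed directly.  The sketch below applies the Shannon-Hartley formula, C = B log2(1 + S/N); the 3,100 Hz usable band follows from the 300 to 3400 Hz range above, but the 35 dB signal-to-noise ratio is an assumed, typical value for an analog voice line rather than a figure from the cited sources.

    import math

    def shannon_capacity_bps(bandwidth_hz, snr_db):
        # Shannon-Hartley theorem: C = B * log2(1 + S/N), with S/N in linear form.
        snr_linear = 10 ** (snr_db / 10)
        return bandwidth_hz * math.log2(1 + snr_linear)

    # 300-3400 Hz analog band = 3100 Hz usable bandwidth; 35 dB SNR is assumed.
    capacity = shannon_capacity_bps(3100, 35)
    print(f"{capacity / 1000:.1f} kbps")  # roughly 36 kbps, near the 33.6 Kbps modem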

     Modern telecommunication infrastructure has “repealed Grove’s Law” (Carr, 2008, p. 60).  Internet-fueled growth provides an abundance of fiber optic cable throughout the country (Carr, 2008, p. 59).  Fiber optic cable is an alternative to copper cable for data transport: copper cables use alternating currents to transport data, while fiber optic cables use pulses of light to transport binary (i.e., digital) data (Odom, 2006, p. 149).

     Fiber optic cables differ from copper cables because they operate at higher frequencies (i.e., higher bandwidth capacity).  Long-haul copper cables have a maximum frequency of 100 MHz per km (Gambling, 2000, p. 1091), which yields a bandwidth of nearly 10 Megabits per second (Mbps).  Until 1992, fiber optic cables had a maximum frequency of 1000 GHz per km (Gambling, 2000, p. 1089), a bandwidth of nearly 20 Gigabits per second (Gbps).  The move from copper to fiber therefore represents a 10,000-fold improvement in frequency (1000 GHz versus 100 MHz).  The invention of the erbium-doped fiber amplifier (EDFA) in 1987 significantly increased existing fiber optic bandwidth capacity (Gambling, 2000, p. 1089).  Fiber optic cables, when amplified with EDFA, have a frequency of 5000 GHz per km.  Information pulses at 100 Gbps “over 1,000,000 km with zero error” (Gambling, 2000, p. 1089).

     The repeal of Grove’s Law extends beyond cables to the airwaves.  Copper and fiber optics transmit data using electrons and light (Odom, 2006, p. 152).  Wireless media uses complex analog radio waves to transmit data (Odom, 2006, p. 153).  Wireless frequencies encompass a wide scope of services: (a) local-area networks (LANs), (b) metropolitan-area networks (MANs), and (c) wide-area networks (WANs) (Froom, Sivasubramanian, & Frahim, 2010, p. 425).

     Wireless LAN, MAN, and WAN services operate within the 2.4 GHz to 5 GHz range (Froom et al., 2010, p. 424).  Wireless network technology was first introduced to the public in 2001 (Standage, 2004).  The Institute of Electrical and Electronics Engineers (IEEE) publishes standards that outline wireless technologies (Table 1) (IEEE, 2013).  IEEE standards document substantial increases in wireless bandwidth.  The broadband revolution has begun to take shape, and current designs favor mobility.


Risk-Benefit Comparison

     The literature review examines two potential effects of BYOD on an organization:  (a) advantages, and (b) disadvantages.

     Advantages.  ICT departments can be viewed as an institutional process that contributes value to organizations (Brynjolfsson, 2003).  The Alcohol and Tobacco Tax and Trade Bureau (TTB) reduced costs and increased security with its remote-access thin client solution (Hughes, 2012).  The TTB policy prevents employees from storing sensitive data on personal property (Hughes, 2012).  TTB’s remote terminal solution also reduced legal and compliance complexities (Hughes, 2012).

     Quantifying the value of ICT (e.g., BYOD) is difficult, but not impossible (Brynjolfsson, 2003).  Colgate-Palmolive estimates their BYOD policy saved over $1 million per year by eliminating BlackBerry corporate licenses (Hof, 2011, p. 2).  The savings were realized after BYOD policy allowed personal devices access to corporate email (Hof, 2011, p. 1).

     Organizations can also benefit from returns in productivity and competitiveness (Brynjolfsson, 2003).  For example, Hyundai incorporates smartphones as part of its manufacturing process (BusinessKorea, 2013).  Workers share multimedia messaging service (MMS) messages when defects are discovered on the production line (BusinessKorea, 2013).  Hyundai’s smartphone innovation increased overall production output (BusinessKorea, 2013).

     Disadvantages.  Wireless access points with weak encryption can expose organizations to external hacking attempts (Cisco, 2010, p. 180).  Risk is also introduced when an employee unknowingly connects a compromised device to the corporate network.  Smart devices can introduce malware that targets network equipment and servers (Donohue & Stewart, 2010).

     There was a 155% increase in mobile malware across all smartphone platforms from 2010 to 2011 (Juniper, 2012, p. 6).  There was an additional 614% increase in mobile malware from 2012 to 2013 (Juniper, 2013, p. 15).  Similarly, organizations are at risk when employees copy sensitive corporate data to their personal devices (Juniper, 2013, p. 18).  Statistics based on remote management applications indicate that 17% of mobile devices are lost or stolen on an annual basis (Juniper, 2013, p. 18).

     There are circumstances when BYOD policy exposes the privacy of employees (Barnes, 2013).  Employees may unknowingly provide their employers with administrative control of personal devices (Barnes, 2013).  Employers gain this control when employees use their personal devices to check corporate email (Barnes, 2013).  In theory, employers can read private emails (e.g., Gmail) and view personal pictures (Barnes, 2013).  Furthermore, employers have the ability to remotely wipe any smartphone that synchronizes with corporate email services (Juniper, 2013, p. 18).  There are inherent risks for both employers and employees.

Methodology Models

     Methodology provides the processes, assessments, and analysis necessary to determine whether technology management facilitates company goals.  The literature review examines three ICT principles: (a) innovation diffusion, (b) general risk management, and (c) organizational design.

     ICT Diffusion.  ICT is the study of organizations and technology.  ICT research explores the dissemination of innovations throughout the workplace.  The employee practice of BYOD is innovative because it changes the way people work.  Each step of the diffusion process is identified and documented.  There are four steps to innovation diffusion:

1. The adoption process identifies the need for innovation or change (Bouwman et al., 2005, p. 58).  Adoption includes information gathering and team building.

2. The implementation process puts a plan into action.  The broad approach treats the whole diffusion process, adoption through effects, as a single implementation process (Bouwman et al., 2005, p. 92).


3. The use process identifies users and stakeholders.  Users can include individuals, groups, and organizations (Bouwman et al., 2005, p. 94).  For example, individuals use personal devices, and the organization uses BYOD policy.

4. The effects process examines the complete diffusion process.  Analysis provides aggregated results based on process observations.  Results can be expressed as qualitative generalizations or quantitative statistics (Bouwman et al., 2005, p. 117).
  
General Risk Management.  Network risk management is a loss control process.  Risk management is designed to assist decision makers:

1. Identify company assets (White, 2011, p. 482).  Assets are company resources that are vulnerable to threats (White, 2011, p. 482).

2. Identify network threats (White, 2011, p. 482).  A threat is anything that can cause harm to a company asset (White, 2011, p. 482).  NIST publishes a comprehensive list of threat events (NIST, 2012).

3. Identify system vulnerabilities (White, 2011, p. 482).  Vulnerabilities are root conditions that expose assets to harm (White, 2011, p. 482).  NIST publishes a comprehensive list of vulnerabilities (NIST, 2012).

4. Estimate the likelihood of an exploit (White, 2011, p. 482).  Likelihood estimates the probability that a threat will exploit a vulnerability (e.g., compromise the production servers) (White, 2011, p. 483).  Likelihood is determined with a risk assessment matrix.

5. Estimate the impact from a harmful event (White, 2011, p. 483).  Impact estimates the loss experienced from a vulnerability that is exploited by a threat (White, 2011, p. 483).  NIST publishes a comprehensive list of adverse impacts (NIST, 2012).

6. Estimate risk through a qualitative risk management matrix.

     Risk is estimated by multiplying vulnerability, impact, and likelihood: R = V x I x L (Brock, 1999).  The formula is applied through the risk assessment matrix (Table 2).  The assessment team determines the matrix’s likelihood values, and choosing those values requires a majority quorum.  The team then assigns one risk value to each vulnerability: (a) high risk, (b) medium risk, or (c) low risk.
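
The formula can be demonstrated with a short Python sketch.  The ordinal scales and rating thresholds below are illustrative assumptions, not values from Brock (1999) or Table 2; an actual assessment team would set its own values by quorum, as described above.

    # Qualitative risk scoring, R = V x I x L, on assumed 1-3 ordinal scales
    # (low=1, medium=2, high=3).  Rating thresholds are likewise illustrative.
    SCORE = {"low": 1, "medium": 2, "high": 3}

    def risk_score(vulnerability, impact, likelihood):
        return SCORE[vulnerability] * SCORE[impact] * SCORE[likelihood]

    def risk_rating(score):
        if score >= 18:
            return "high risk"
        if score >= 6:
            return "medium risk"
        return "low risk"

    print(risk_rating(risk_score("high", "medium", "high")))  # high risk (score 18)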


Organizational Design.

     The Star Model for Decision Making is an organizational design framework.  The Star Model outlines the problem in common language, forces designs based on long-term goals, and provides decision makers a series of understandable choices (Kates & Galbraith, 2007, p. 2).  The approach begins by identifying the strategic goal; subsequent steps outline the goal’s structure, processes, incentives, and people.  The Star Model asks five main questions: (a) What is being done?  (b) Who is doing it?  (c) Why are they doing it?  (d) How are they doing it?  And (e) should it be done? (Figure 1)  (Malone, Laubacher, & Dellarocas, 2010).


Figure 1.  Star Methodology outline.

     This literature review concludes that organizations will benefit from a network risk assessment process.  The recommendation is based on (a) historical developments in technology; (b) examination of potential benefits and risks; and (c) BYOD methodology processes.

     History.  The use of personal technology in the workplace is a modern phenomenon.  Personal devices are possible because of recent innovations in power efficiency and bandwidth.  BYOD is prevalent because technology influences how people work.

Benefits and risks.  Mobile personal devices are common tools.  Analysis indicates that organizations can benefit from financial, efficiency, and productivity gains.  On the other hand, personal devices can introduce threats to vulnerable system resources.

Methodology Processes.  Various methodology processes can help organizations assess the potential benefits and risks introduced by mobile personal devices.

Chapter III:  Methodology

     The infrastructure goal states that production servers must be available to customers.  The network has a successful record of continuous operations, and to date customers have not experienced major disruptions of services.  Previous successes may be attributed to the collective knowledge and experience of the organization's ICT staff.  In any case, conjectural mitigation is not a prudent strategy: new security controls are required because employees connect their personal devices to the company network, and network threats may manifest as those smart devices connect.

     The organization employs a sophisticated network, but its mitigation resources are mostly undocumented.  Existing network security processes are unproven propositions because they are based on incomplete information.  Unfounded assumptions “can lead to broken, misconfigured, or bypassed security mechanisms” (Cisco, 2010).  An effective network assessment allows companies to make informed decisions.

Methodology Overview

     This study seeks to align the use of employee personal technology with business strategy.  Methodology provides the processes, assessments, and analysis necessary to determine whether technology management facilitates company goals.  The study proposes a synthesized methodology, the ICT Risk Assessment Model (IRAM), which provides an in-depth understanding of BYOD policy through a process of systematic planning.  The IRAM model is based on three ICT principles: (a) innovation diffusion, (b) general risk management, and (c) organizational design (Figure 2).  Each principle uniquely contributes to the IRAM methodology goal: innovation diffusion provides IRAM with a framework through four diffusion phases; risk management identifies risk conditions and uses a qualitative assessment for evaluation; and organizational design introduces a logical and straightforward interpretation.  Decision makers will benefit from a pithy interpretation.

     


ICT Diffusion

     Innovation diffusion is the first phase of the IRAM methodology process.  Each step of the innovation diffusion process (i.e., adoption, implementation, use, and effects) is documented:

1. This study identifies BYOD policy as the candidate for change within the organization.  Team participants will include those most familiar with network operations: system administrators and management.

2. This study uses a narrow interpretation of implementation and emphasizes design and development.  The implementation phase centers on the risk management assessment.

3. This study identifies users as stakeholders.

4. This study takes a narrow interpretation of effects and defers specific analysis to the IRAM organizational design process.  The completed analysis will determine if BYOD policy aligns with company goals.

Risk Management

     Risk management is the second phase of the IRAM methodology process.  Data attributes are identified and applied to the risk assessment.  Likelihood and impact are estimated through the risk assessment:

1. This study identifies the production servers as the primary assets.

2. This study uses a broad interpretation of threats, and identifies four potential events: (a) changing data, (b) deleting data, (c) stealing data, and (d) disruption of services.  Future research may include a narrow scope for threat identification:  For example, viruses, Trojan Horses, worms, and Denial of Service (DoS) attacks.

3. This study uses a broad interpretation of vulnerabilities, and identifies four potential conditions:  (a) infrastructure design, (b) applications, (c) operations, and (d) people.  Future research may include a narrow scope of vulnerabilities:  For example, firewalls, custom macros, policies and procedures, and accidents.

4. Likelihood is expressed in qualitative format during the risk assessment.

5. This study uses a broad interpretation of impact and identifies three potential conditions:  (a) data confidentiality, (b) data integrity, and (c) data availability.  Future research may include a narrow scope of impact, such as financial losses or customer losses.

6. The network assessment team identifies risk using the risk assessment matrix (Table 2); a skeleton of such a matrix is sketched after this list.
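
As an illustration of how this chapter’s broad categories could populate the assessment matrix, the Python sketch below crosses the four vulnerability conditions with the four threat events identified above.  The category names come from this chapter, but the matrix structure is an assumed stand-in for Table 2, not a reproduction of it.

    from itertools import product

    vulnerabilities = ["infrastructure design", "applications", "operations", "people"]
    threats = ["changing data", "deleting data", "stealing data",
               "disruption of services"]

    # Each (vulnerability, threat) cell receives a qualitative likelihood from
    # the assessment team; None marks a cell awaiting a quorum decision.
    matrix = {(v, t): None for v, t in product(vulnerabilities, threats)}
    print(len(matrix))  # 16 cells to assess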

Organizational Design

     The Star Model for Decision Making encapsulates the IRAM methodology within a simple framework.  Star Model questions are framed according to the project scope.  The results formalize the IRAM methodology into two formats: (a) a pithy report, and (b) a tabular reference (Table 3).

IRAM Methodology

1. What is the goal?  Data integrity, confidentiality, and reliability are at risk from the combination of vulnerabilities and threats.  The goal is to reduce or prevent the likelihood of production server exploitations.  The IRAM goal aligns with the adoption process of diffusion because an innovation has been identified.

2. Who is at risk?  The organization stakeholders are at risk from vulnerabilities and threats.  The stakeholders are the production processes, data, and systems.  Stakeholders are participants in the usage process of diffusion.

3. Why are the production systems at risk?  Production servers are vulnerable through a wide scope of interactions with infrastructure, applications, operations, and people.  Vulnerabilities are risk conditions that stem from the implementation process of diffusion.

4. How are the production servers at risk?  Circumstances and events can harm production servers with threats of data changes, data theft, data disruption, and data destruction.  Threats are closely related to vulnerabilities, and both components align with the implementation phase of the diffusion process.

5. Should the risk be mitigated?  The IRAM risk assessment matrix estimates the effects and likelihood for vulnerabilities.  Network operators will use the assessment to determine whether controls are needed to mitigate the potential impact from risks.  Risk assessments align with the diffusion process of effects.



Data Analysis

Decision makers can use the IRAM methodology process to help determine if BYOD is appropriate for their organization.  System areas that denote high risk require mitigation.  Medium risk deserves substantial consideration.  Mitigation may be optional for low risk areas.

     This study recommends a detailed quantitative mitigation assessment for systems that require mitigation.  Quantitative mitigation assessments assign monetary values to assets (i.e., production servers) and mitigation processes (e.g., firewalls and anti-virus software).  Ultimately, organizations must decide whether the benefits of BYOD are worth the potential risks.


References

Barnes, N. M. (2013, September 26). BYOD: balancing employee privacy concerns against employer security needs. Retrieved from Association of Corporate Counsel: http://www.lexology.com/library/detail.aspx?g=1109490a-6895-40f0-a7a3-afc714316165
Bouwman, H., van Dijk, J., van den Hooff, B., & van de Wijngaert, L. (2005). Information & Communication Technology in Organizations. London: SAGE Publications.
Brynjolfsson, E. (2003, July). The IT Productivity Gap. Optimize Magazine (21). Retrieved from http://ebusiness.mit.edu/erik/Optimize/pr_roi.html
BusinessKorea. (2013, November 22). Reason for Increasing Recalls. Seoul, Korea. Retrieved from http://www.businesskorea.co.kr/article/2238/reason-increasing-recalls-use-smartphones-during-work-hours-emerging-significant
Carr, N. (2008). The Big Switch. New York: W. W. Norton & Company, Inc.
Chen, B. X. (2013, May 1st). Cellphone Thefts Grow, but the Industry Looks the Other Way. New York Times, p. A1. Retrieved from http://www.nytimes.com/2013/05/02/technology/cellphone-thefts-grow-but-the-industry-looks-the-other-way.html?_r=0
Cisco. (2010). 6.4.3 Wireless Security Solutions. In Cisco, CCNA Security Course Booklet (p. 180). Indianapolis, IN: Cisco Press.
Cisco. (2012). BYOD and Virtualization Survey Report. Indianapolis: Cisco IBSG. Retrieved from http://www.cisco.com/web/about/ac79/docs/BYOD.pdf
Cisco. (2012, October 16). Digital Subscriber Lines. Retrieved from Cisco Systems, Inc.: http://docwiki.cisco.com/wiki/Digital_Subscriber_Line
Craig-Wood, K. (2012, April 26). Energy-efficient cloud computing: Jevons Paradox vs. Moore’s Law. Retrieved from Mesmet Blog: http://www.katescomment.com/
Donohue, D., & Stewart, B. (2010). Campus Network Security. In CCNP Routing and Switching Quick Reference (p. 191). Indianapolis, IN.: Cisco Press.
Feng, W.-c. (2003, October 1). Making a Case for Efficient Supercomputing. Queue, 1(7), 54. Retrieved from http://dl.acm.org/citation.cfm?doid=957717.957772
File, T. (2013). Computer and Internet Use in the United States. Washington DC: U.S. Census P20-569. Retrieved from http://www.census.gov/prod/2013pubs/p20-569.pdf
Fortinet. (2013, October). Fortinet Internet Security Census 2013. Retrieved from http://www.fortinet.com/sites/default/files/surveyreports/Fortinet-Internet-Security-Census-2013.pdf
Froom, R., Sivasubramanian, B., & Frahim, E. (2010). Implementing Cisco IP Switched Networks (SWITCH). Indianapolis: Cisco Press.
Gambling, W. A. (2000, Nov-Dec). The Rise and Rise of Optical Fibers. IEEE Journal on Selected Topics in Quantum Electronics, 6(6), 1077-1093. doi: 10.1109/2944.902157
Glanz, J. (2012, September 22). The Cloud Factories: Power, Pollution and the Internet. Retrieved from The New York Times: http://www.nytimes.com/2012/09/23/technology/data-centers-waste-vast-amounts-of-energy-belying-industry-image.html?pagewanted=1&_r=1
Hardesty, L. (2010, January 19). Explained: The Shannon limit. Retrieved from Massachusetts Institute of Technology News: http://web.mit.edu/newsoffice/2010/explained-shannon-0115.html
Herman, J. (2011, September 21). Why is My Phone So Hot? Popular Mechanics. Retrieved from http://www.popularmechanics.com/technology/how-to/tips/why-does-my-phone-get-so-hot
Hof, R. (2011, August 15). Bring Your Own Device. Retrieved from MIT Technology Review: http://www.technologyreview.com/news/425009/bring-your-own-device/
Hughes, R. (2012, August 13). Allowing Bring Your Own Device with Minimal Policy or Legal Implications. Retrieved from The White House: http://www.whitehouse.gov/digitalgov/bring-your-own-device#ttb
IEEE. (2013, December). IEEE Std 802.11. Retrieved from IEEE Standards Association: http://standards.ieee.org/findstds/standard/802.11-2012.html
Intel. (2013, October 5). Moore's Law and Intel Innovation. Retrieved from Intel: http://www.intel.com/content/www/us/en/history/museum-gordon-moore-law.html
Juniper Networks. (2012, February). 2011 Mobile Threats Report. Retrieved from Juniper Networks: http://www.juniper.net/us/en/local/pdf/additional-resources/jnpr-2011-mobile-threats-report.pdf
Juniper Networks. (2013). Juniper Networks Third Annual Mobile Threats Report. Retrieved from Juniper Networks: http://www.juniper.net/us/en/local/pdf/additional-resources/jnpr-2012-mobile-threats-report.pdf
Koomey, J. (2011, February 13). A fascinating encounter with advocates of large rebound effects. Retrieved from Jonathan G. Koomey, PHD.: http://www.koomey.com/post/3286897788
Koomey, J., Berard, S., & Sanchez, M. (2011, July-September). Implications of Historical Trends in the Electrical Efficiency of Computing. IEEE Annals of the History of Computing, 33(3), 46-53. doi:10.1109/MAHC.2010.28
Odom, W. (2006). Networking Basics. Indianapolis: Cisco Press.
Owen, D. (2010, December 20). Annals of Environmentalism the Efficiency Dilemma. The New Yorker, 78-79. Retrieved from http://www.newyorker.com/reporting/2010/12/20/101220fa_fact_owen
Pew Internet. (2013, October 18). Pew Internet and American Life Project. Retrieved from Tablet and E-reader Ownership Update: http://pewinternet.org/Reports/2013/Tablets-and-ereaders/Findings.aspx
Shannon, C. E. (1948, July, October). A Mathematical Theory of Communication. The Bell System Technical Journal, 27, 379-423, 623-656. Retrieved from http://web.mit.edu/persci/classes/papers/Shannon48.pdf
Standage, T. (2004, June 12). A brief history of Wi-Fi. The Economist. Retrieved from http://www.economist.com/node/2724397/print
Troianovski, A. (2012, April 3). Optical Delusion? Fiber Booms Again, Despite Bust. Retrieved from The Wall Street Journal: http://online.wsj.com/news/articles/SB10001424052702303863404577285260615058538
White, G. (2011). Security+ Certification (pp. 477-494). Emeryville: McGraw-Hill.


A Brief History of the Internet of Things

Introduction

During the Internet’s brief history there have been four major phases, each with an impact on humanity (Evans, 2011).
  1. Academia.  Primary use allowed universities to interconnect.
  2. Static content.  Simple web pages provided limited content to the public.
  3. Dynamic content.  Business transactions became possible; online banking and shopping became the norm.
  4. Social networks.  The Internet became ingrained in daily social life, with regular interactions among friends and family occurring through online services such as Facebook, Twitter, and Google.
Throughout the Internet’s evolution its primary function has remained consistent: providing people with information.  The Internet is now on the cusp of a major transformation that will change its purpose from serving people to serving things.  This new concept is referred to as the Internet of Things (IoT).

History

When the personal computer became part of mainstream culture in the 1980s, individuals began to rely on those machines to perform tasks previously done by humans. The advent of networked computers allowed people to communicate with each other as never before: first to send rudimentary messages, then, over the Internet via the World Wide Web, to exchange goods and services and to establish social networks. As these interactions have evolved, they have become increasingly complex and connect users in an ever-expanding number of ways (e.g., ordering pizza via a website rather than a landline telephone); at the same time, these technologies have become part of daily life, no longer unique, and make users’ lives easier.
Weiser (1991), who laid the conceptual groundwork for IoT, envisioned a world in which users’ lives revolved almost completely around connected technologies that faded into the background of their daily routines. As head of the Computer Science Laboratory at the Xerox Palo Alto Research Center, Weiser pioneered the concept of ubiquitous computing. In the near future everything will contain transmitters and receivers, and billions of people and objects will be interconnected.  That vision has evolved into what is now the framework for IoT.

IoT Blueprint

The basic premise of Weiser’s IoT theory is that, over time, humanity’s everyday tools will contain sensors that connect to each other, transmitting and receiving information. The information measured can relate to time (e.g., movement of an object, time of day), place (e.g., at the PC, indoors or outside), and/or the thing itself (e.g., human-to-human or computer-to-computer interaction) (ITU, 2005).
Ley (2007) explains that, for items to be connected, they must have their own identities. “In order for objects and devices to usefully become part of a wider intelligent, information sharing network, it is vital that each one has a unique identity. This not only enables more things to be interconnected, it also means that objects that surround us can become resources and act as interfaces to other resources” (p. 65).
In its 2005 executive summary on IoT, the International Telecommunication Union (ITU) outlined IoT in three steps. The first step is to connect everyday tools to large databases and networks, and then to the Internet, the greatest network of networks (ITU, 2005). Radio-frequency identification (RFID) provides the ideal solution: it is an inexpensive and simple way to process a wide variety of data from a range of devices.

Radio Frequency Identification

Radio frequency identification (RFID) is a generic term to describe the technology that utilizes radio waves to identify items (Ley, 2007). RFID-based systems can provide real-time tracking information.  RFID technology is widely used in tags that can collect information and then transmit it to computer systems (e.g., shipping information, supply chain management, toll road transponders, “chipping” pets). Ley describes these tags in detail.
There are two main types of RFID tags: passive (energy harvested from the reader) and active (with their own power supply). The more sophisticated tags offer read/write capabilities. RFID chips can be as small as 0.05 mm2 and can be embedded in paper. More recently, printable tags have been developed. RFID systems do not require line of sight and work over various distances from a few centimetres to 100 metres depending on the frequency used and type of system. Standards for tags and electronic product codes (EPC) are being overseen by EPC Global. (p. 66)
The second step of IoT is to use sensor technologies to interpret the information collected via RFID, detecting changes in the physical status of things (ITU, 2005). Sensors are crucial to making IoT function. They serve as the “human” element in the process, detecting changes much the way the body’s systems would and initiating responses in the technology accordingly. In other words, “sensors play a pivotal role in bridging the gap between the physical and virtual worlds, and enabling things to respond to changes in their physical environment” (ITU, 2005, p. 4).
The third step is the rapid expansion of nanotechnology, allowing RFID-enabled sensors to be installed in smaller and smaller places. Such advancements have enabled the development of a nearly unimaginable array of smart devices, from phones to credit cards to QR codes to home security systems that can be remotely activated through a smart device.

IoT Dependencies

Jeff Apcar, a Distinguished Services Engineer with Cisco Advanced Services, explains that there are two major considerations regarding the progression of IoT (Apcar, 2011):
  1. Physical limitations: size, available memory, CPU capacity, and power supply.
  2. Logical limitations: standardization and addressing.
To transform regular objects into smart-objects there must be standardization.  Apcar explains that the next iteration of the Internet must incorporate an IP address into each smart-object.  When objects have IP addresses they can be organized into a network.
IoT cannot be implemented with the current IPv4 addressing scheme.  There are over 7 billion people in the world, yet IPv4 provides approximately 3.7 billion usable IP addresses.  The world faces a shortage of IPv4 addresses.  To support billions upon billions of smart-objects, the IPv6 protocol must be used: IPv6 provides approximately 3.4×10^38 addresses (Huston, 2003).
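
The address arithmetic behind these figures can be verified in a few lines of Python; the totals below are raw address-space sizes, before any reserved ranges are subtracted.

    # IPv4 uses 32-bit addresses; IPv6 uses 128-bit addresses.
    ipv4_total = 2 ** 32   # 4,294,967,296 (~3.7 billion usable after reservations)
    ipv6_total = 2 ** 128  # about 3.4 x 10^38

    print(f"IPv4: {ipv4_total:,}")
    print(f"IPv6: {ipv6_total:.2e}")
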
Power technology will need to be further developed as well.  People currently manually re-charge batteries for their internet connected gadgets.  Other Internet connected gadgets are directly powered by an AC outlet.  IoT smart-objects will be too numerous and too small to manually supply power.  Apcar explains that the typical smart-object may be smaller than the tip of a pen.  If the smart-object is battery powered it must be energy efficient and a single charge may have to last for years (Apcar, 2011).

LLN

The smart-object’s physical properties limit its range and scope.  Small size and limited power translate to wireless links of unpredictable quality (Apcar, 2011).  Routing Over Low power and Lossy networks (ROLL) protocols compensate for these device constraints.  Low power and Lossy networks (LLNs) are interconnected by a variety of wireless technologies.  LLNs have at least five characteristics (IETF, 2013):
  1. LLNs operate with a hard, very small bound on state.
  2. LLNs optimize for saving energy.
  3. Unicast and anycast support.
  4. Limited link layers with restricted frame sizes.
  5. Efficiency versus generality.
Current routing protocols such as OSPF and IS-IS have been considered for use with LLNs, but they do not currently meet all necessary requirements (IETF, 2013).

6LoWPAN

6LoWPAN is a technology that allows IPv6 communication over IEEE 802.15.4-based networks.  802.15.4 defines low-rate wireless personal area networks (LR-WPANs) (Apcar, 2011).  The 6LoWPAN effort was chartered to design a low-power, low-data-rate solution that operates on an unlicensed, international frequency band (IEEE, 2012).  It is relatively slow when compared to modern Wi-Fi: 802.11 standards can transfer data at gigabit rates, while the best data transfer rate of 6LoWPAN is only 250 kbps (IEEE, 2012).  However, that bandwidth is sufficient to transfer text data from embedded sensors.
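
To make the claim about sensor text data concrete, the sketch below computes transfer time at the 250 kbps maximum rate; the 100-byte payload is an assumed, illustrative sensor reading, and the math ignores protocol overhead and retransmissions.

    def transfer_time_ms(payload_bytes, link_kbps=250):
        # Link-rate math only: bits divided by bits per second, in milliseconds.
        bits = payload_bytes * 8
        return bits / (link_kbps * 1000) * 1000

    # An assumed 100-byte sensor reading at the 802.15.4 maximum of 250 kbps:
    print(f"{transfer_time_ms(100):.1f} ms")  # 3.2 ms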

CoRE

            Both LLN and 6LoWPAN are intended for the network, transport, and session layers of the OSI model.  The Constrained RESTful Environments (CoRE) architecture provides an application protocol designed to work with smart-objects.  CoRE uses the Constrained Application Protocol (CoAP) to support a wide range of devices, transports, and applications (Apcar, 2011).  CoAP uses an embedded web transfer protocol (coap://) that is HTTP-compatible.  A simple packet header of less than 10 bytes keeps overhead low.  CoAP is defined for UDP communication (Shelby, 2011).
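
The compactness of the CoAP header can be shown by constructing one.  The sketch below packs the 4-byte fixed header of a confirmable GET request as defined in RFC 7252, the standard that later finalized the draft protocol described above; the message ID is an arbitrary example value.

    import struct

    def coap_get_header(message_id):
        # CoAP fixed header: Ver=1 (2 bits), Type=0 confirmable (2 bits),
        # TKL=0 (4 bits), then Code 0.01 = GET, then a 16-bit message ID.
        version, msg_type, token_length = 1, 0, 0
        byte0 = (version << 6) | (msg_type << 4) | token_length
        code = 0x01  # class 0, detail 01 = GET
        return struct.pack("!BBH", byte0, code, message_id)

    header = coap_get_header(0x1234)  # example message ID
    print(header.hex(), len(header), "bytes")  # 40011234 4 bytes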

Sociological/User Implications

“The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it” (Weiser, 1991, p. 94). That quote, perhaps Weiser’s most famous, precisely encapsulates his vision of IoT. Little more than twenty years later, that vision is reality. In 2010, 12.5 billion devices were connected to the Internet, or 1.84 devices for each of the world’s then 6.8 billion inhabitants. By 2015, an estimated 25 billion devices will exist for 7.2 billion people, or 3.47 devices per person (Cisco, 2011). These numbers consider everyone on earth, including those in developing countries who may not even own a smart device, which means that many who are connected to IoT do so through multiple devices. Clearly, this technology has become part of the fabric of daily life.
With such connectivity come concerns regarding privacy. The technology business has pushed development beyond the rudimentary interactions of its early days to the point where users can conduct much of their lives electronically. Naturally, questions have arisen as to who controls the data collected by the billions of sensors, “the eyes and ears embedded in the environment surrounding us?” (ITU, 2005, p. 9) The answer is murky. Quite simply, end users have no way of knowing who or what sees their personal data; they must trust the governments, businesses, and all other entities that cultivate and share data through IoT to behave in an ethical way. There are also very real concerns about the security of the software, especially as it relates to personal information.
There are also genuine concerns about invasion of privacy, trust and the security of systems. Already, some RFID schemes have been halted in schools and the commercial sector because of public concerns.  RFID enabled passports have been shown to be insecure…Even now, people can be tracked through their mobile phones, credit/loyalty cards, and CCTV, but the convenience and benefits of these technologies are often seen as outweighing the concerns. This may not always be the case and policies and protections need to be put in place, especially when dealing with information about learners. (Ley, 2007, p. 76)
Indeed, there are countless examples of life with IoT raising questions about security and what the individual can expect in a newly digital age. Going forward, these questions will continue to arise as technology improves, forcing users to determine their personal balance of privacy and convenience.

Conclusion

What will Western society, and IoT, look like in the next decade? Rapidly advancing technology makes this practically impossible to predict. While the vision shaping IoT remains steady, the network itself is still being built. For those willing and able to hatch the next new ideas, great reward is possible. IoT start-up companies are proposing solutions to problems like providing power sources for minuscule sensors, shaping the future of technology-enabled connectivity (Ackerman, 2012).
Future smart energy grids may provide fault tolerance and load balancing (similar to Internet routing).  IoT may innovate transportation to prevent vehicle collisions, eliminate drunk driving, and safely allow higher speeds on the interstate.  IoT also has the potential to transform modern medical treatment: as smart-objects become smaller, they can act as probes that safely detect, and possibly treat, cancers.  All professional disciplines will benefit from the wealth of information that billions of smart-objects will provide.  The era of IoT is under way.


Written by Steven Jordan on December 17th, 2012
References
Ackerman E. (2012, November 4). Could an Internet of Things Startup Be The Next Microsoft? Three Hobby Kits Hold Promise. Forbes QUBITS blog. Retrieved from http://www.forbes.com/sites/eliseackerman/2012/11/04/could-an-internet-of-things-startup-be-the-next-microsoft-three-hobby-kits-hold-promise/.
Evans, D. (2011). The Internet of Things: How the Next Evolution of the Internet Is Changing Everything. Cisco Internet Business Solutions Group white paper. Retrieved from http://www.cisco.com/web/about/ac79/docs/innov/IoT_IBSG_0411FINAL.pdf
IEEE. (2012). IEEE 802.15 WPAN Task Group 4 (TG4). Retrieved from http://www.ieee802.org/15/pub/TG4.html
Internet Engineering Task Force. (2013). Routing Over Low power and Lossy networks (roll). Retrieved from http://datatracker.ietf.org/wg/roll/charter/
International Telecommunication Union. (2005). ITU Internet Reports 2005, Executive Summary: The Internet of Things. Retrieved from http://www.itu.int/dms_pub/itu-s/opb/pol/S-POL-IR.IT-2005-SUM-PDF-E.pdf
Ley D. (2007). Ubiquitous Computing. In Emerging Technologies for Learning, Volume 2 (chapter 6). Retrieved from http://www.pgce.soton.ac.uk/ict/NewPGCE/PDFs/emerging_technologies07_chapter6.pdf.
Shelby, Z. (2011). Smart Objects Tutorial, IETF-80. Retrieved from http://6lowpan.net/wp-content/uploads/2011/03/Shelby-core-tutorial-v2.ppt.pdf
Huston, G. (2003). IPv4: How long do we have? The Internet Protocol Journal, 6(4). Retrieved from http://www.cisco.com/web/about/ac123/ac147/archived_issues/ipj_6-4/ipv4.html
Weiser, M. (1991). The Computer for the 21st Century. Scientific American, 265(3), 94-101. Retrieved from http://wiki.daimi.au.dk/pca/_files/weiser-orig.pdf




The Adventures of an IT Leader - Review 1 of 2




Review:  The Adventures of an IT Leader (2009), written by Robert D. Austin, Richard L. Nolan, and Shannon O’Donnell, is a fictional story of Jim Barton and the challenges he faced as the new CIO of the IVK Corporation.

The story began with Barton’s ascension to the head of the IT department.  IVK Corporation had recently experienced tremendous growth: annual sales had increased from $41 million to $233 million over a three-year period.  Sales, however, had leveled off, and growth was flat compared to the previous year.  The board of directors had let IVK’s previous CEO go and hired a new CEO, Carl Williams.
Williams decided to shake things up and reorganized roles within the company.  Williams fired Bill Davies, the IT manager, and offered the position to Barton.  The news came as a shock to Barton because he had no technical background, although he had a great reputation as the head of the Loan Operations department.

Besides Barton’s reputation as a good manager, he was chosen because he had been vocal in identifying problems with the IT department at executive meetings.  Williams was also looking for leadership qualities that were the exact opposite of Davies’s.  Davies was very knowledgeable about computer networks, but he seemed overwhelmed and was a poor communicator.

Because of Barton’s limited IT experience, he began a knowledge quest to quickly get up to speed on everything related to IT.  Barton sought answers from his girlfriend, Maggie Landis, who happened to be a management consultant.  She suggested that Barton start by meeting with leaders from the other business units and asking them how their business needs were being met by IT.  Barton also befriended “the kid,” a wise twenty-something, at a bar near his condo.  The kid’s first advice was to quickly identify the best talent, along with the cryptic “know what you don’t know.”  By strange coincidence, Barton also ran into Davies during a jog.  Davies told Barton that no one could manage the madhouse and left him with a parting, “You’ll be gone in under a year.”

When Barton first began his role as CIO, he held a five-minute meeting with the IT managers.  He requested that they formally schedule a management meeting to set direction for the department.  The managers requested additional staff to help with the discussions, but Barton preferred it be management only.  Barton wondered why they had to lean on their “tech-nerd sidekicks” and worried that his management staff was not qualified.

Throughout the story Barton discovered that IT was very complex and that it was nearly impossible to be a specialist in every field.  At one point Barton went to the bookstore and bought $1,200 worth of IT books to get up to speed.  He read throughout the night and realized that IT was “complicated as hell.”  When he was head of Loan Operations he had the skill and knowledge to do the job of any employee in his department.  In contrast, an IT manager could not possibly master every technology at IVK: the Exchange specialist may not know anything about .NET programming, and the programmers may not know anything about routers and switches.  Barton’s managers would also learn that not every IT meeting involved all things technical.

In the course of Barton’s duties he was asked by the CEO to participate in a leadership meeting.  Each department was asked about spending and the bottom line, and Barton had to justify the cost of the IT department.  Through the process he discovered the IT department was funded through a complicated charge-back system that subjectively billed the other departments for their IT-related services.  In addition, IT was generally viewed as an unwelcome expense by the other business units.  Barton also found that because of IVK’s rapid growth, IT was forced to provide the same services to more and more customers with the same limited resources.  Barton’s goal was to demonstrate the value of IT.

Landis introduced Barton to competing philosophies regarding IT value.  She suggested an article called “IT Doesn’t Matter” by Nicholas Carr.  Carr argued that IT investment did not create value: IT was a commodity and did not provide a competitive advantage.  However, MIT professor Erik Brynjolfsson argued IT created value in a number of ways:
  • Firms that invested heavily in IT deployed business processes quickly
  • Competition was able to copy new innovations quickly.
  • Firms that invested heavily in IT had increased market share.
Barton also learned from one of his managers, Bernie Ruben, the concept of “competes” versus “qualifiers.”  If IT were a race, a “qualifier” would be something that needed to be done simply to participate in the event, while a “compete” would be an innovation that helped win the race.  Barton thought that if he could document each IT service as either a compete or a qualifier, he might be able to demonstrate IT’s value.


Ruben also touched on service-oriented architecture (SOA) and business intelligence (BI) concepts.  SOA is a foundation for combining different application services to deliver new functionality.  Web-based portals that support business objectives can be considered SOA.

Business intelligence involves data mining.  BI analyzes all available data, such as customer, sales, and competition data, and identifies trends.  Landis gave an example of BI from a Spanish company called Zara.  The clothing retailer used its point-of-sale data to identify its hottest-selling items.  The process automated ordering and allowed Zara to re-supply its best-selling inventory before the competition.

Accompanying Barton’s research on the value of IT was the role of project management within the IT department.  Barton had to figure out how to plan for the unexpected.  He discovered an article by Jim Highsmith called “Agile Project Management: Principles and Tools” and learned that APM teams expect the initial plan to be wrong.  Prototypes and quick releases of specific features are important with APM.

Barton also learned from “the kid” about the book The Death March, by Ed Yourdon.  A death march is a doomed project that managers are determined to stick with.  After Barton took charge he discovered a runaway project in which IVK had already invested over $3 million.  The project was sold and run by an outside consultant, and it was initially pushed by a business unit over the objections of the IT staff.  Rather than waste further company resources, Barton decided to kill the project.

Reflections:
I am impressed with the relevance of the content.  I was able to identify real-life situations from my work experience in every chapter.  After the first couple of chapters I thought the authors had specifically written about the previous company I worked for.  I had been hired at the tail end of its explosive growth.  Just like IVK, it went from a $10 million company to a $100 million company in a three-year period.

When I was first brought on board, the IT department was too informal.  The IT manager was “computer” smart and hands-on.  I found my manager was similar to the Davies character in the book: he enjoyed working with every technology, and because of that, he liked to micromanage every project.  As a result, IT was always behind on help desk requests and big-ticket projects.  We did good work, but I don’t think we kept up with the needs of all the business departments.

Similar to the situation in the book, growth had leveled off, and the IT relationship with the other business departments suffered.  Directors viewed IT as very expensive and did not understand the multitude of services we supported.  In addition, the IT budget and staffing had not kept up with the growth of the other business departments.  We were expected to serve 800 additional network users with the same resources as before the growth.

My original IT manager was let go and a new IT director was brought on.  By the time I left the company I had seen a big difference in the level of service we offered.  My new boss had implemented a series of changes, similar to Barton’s.  He was quick to identify our strengths and delegate responsibility, which resulted in quicker workflow and turnaround.

The book’s CEO, Williams, also advocated for an IT leader with business experience instead of one with only technical knowledge.  When my boss was first hired, I remember thinking he had an acute lack of technical knowledge.  I wasn't sure what to expect, but I didn't think it was a good thing.  After working with him, I found my initial reaction was wrong: his technical skills were stronger than I first suspected, and more important, his managerial experience helped the company move forward.

After reading the first half of The Adventures of an IT Leader, I found comfort knowing the situation I experienced was not unique to my previous employer.  The advice and examples in the book are based on the first-hand experience of authors who have worked with many IT departments.  The book should also serve as a wake-up call to all IT professionals: raw technical skills will not cut it in the modern business environment.  It is just as important to consider other ICT concerns such as the organization, management, end users, and lines of communication.

The second part to this review can be found at:
http://www.stevenjordan.net/2012/08/the-adventures-of-it-manger-review-part.html

P.S.  This post has been surprisingly popular.  Please leave a comment if you found it helpful.
Thanks!  -SMJ

References:
Austin, R. D., Nolan, R. L., & O’Donnell, S. (2009).  The Adventures of an IT Leader.  Boston, MA: Harvard Business Press.

Last updated  July 16th, 2013 by Steven Jordan

WAN File Server Problems - SMB Limitations Over the VPN, Internet, WAN...

Abstract:  

   This research examines the limitations of SMB file transfers over the WAN.  End users complain of slow file browsing, slow file enumeration, and an inability to save Word files from the branch office.  Recommendations are made to resolve the issues.  


WAN File Services: The Influence of Latency and Protocols over the WAN

By Steven M. Jordan
University of Wisconsin-Stout
Last updated November 20th, 2013

Chapter 1:  Introduction

This research is based on a network problem between a corporate office and a branch office.  The corporate office is based in Oconomowoc, WI, and is referred to as ORP.  The branch office is based in Carmel, IN, and is referred to as IDTC.  The ORP corporate office has an in-house datacenter that provides network connectivity to over 100 branch offices throughout the Midwest. 

Scope:

The network is modeled on a hub-and-spoke design.  The ORP datacenter is the central hub and facilitates all network services to the separate branch offices.  Provided network services include terminal, email, and file sharing via Windows 2003 servers.  The primary methods used to connect branch offices include Internet VPNs (virtual private networks), T1 circuits, and DOCSIS (data over cable).  Approximately 10% of IDTC staff access the ORP file server from Windows-based workstations.  The remaining 90% of IDTC network users connect to the ORP file server from thin clients (simple computers) via Citrix Terminal Server.

Problem Statement:

Network users experienced difficulty connecting to the remote ORP file server.  The network problem caused work loss and disruption for staff located at the IDTC branch office.  Reports of the problem were sporadic and the exact cause remained unidentified.  The problem was not experienced by all network users at IDTC.  The end users who had experienced the problem complained of slow network file browsing and slow file enumeration.

The problem was most symptomatic when large file directories on the ORP file server were accessed.  The directories associated with disruption usually contained hundreds of files and folders.  Workstations became unresponsive, mouse icons displayed an hourglass symbol, and several minutes passed before user functionality returned.  There were also reports of mapped network drives that had disappeared or error messages that read, “Network location is unavailable.”

Chapter 2:  Problem Determination


            Network tests were required to diagnose potential problems.  The research targeted three potential problems:
1. Network latency
2. Bandwidth
3. Network protocols

Latency: 

Latency is the delay of data flow.  A ping test measures the amount of time data requires to travel across the network.  Lower ping times indicate faster network connections; higher ping times indicate slower connections.  The ping test between IDTC and ORP measured an average time of 25 ms.  A latency of 25 ms is usually considered sufficient to support network services for a remote office.
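As a rough illustration of this measurement, latency can also be estimated without the ping utility by timing TCP handshakes.  The sketch below is a minimal Python example, assuming hypothetical host names for the branch print servers and that TCP port 445 (already open for file sharing) is reachable:

import socket
import time

def tcp_rtt_ms(host, port=445, samples=5):
    """Estimate round-trip latency by timing TCP handshakes.

    Port 445 (SMB) is assumed reachable; any open TCP port works.
    """
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # handshake completed; elapsed time approximates one RTT
        times.append((time.perf_counter() - start) * 1000)
    return sum(times) / len(times)

# Hypothetical hosts standing in for the ODTC and IDTC endpoints.
for name, host in [("ODTC", "odtc-print01"), ("IDTC", "idtc-print01")]:
    print(f"{name}: {tcp_rtt_ms(host):.1f} ms average")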

Test results between IDTC and ORP were then compared to latency data from a separate branch office.  The second branch office is referred to as ODTC, and is located in Summit, WI.  The wide area network (WAN) technology is similar at each location.  The major difference between ODTC and IDTC is their geographic distance from ORP.  IDTC is located in Indiana while ODTC is located within 10 miles of ORP (Jordan, 2011, p. 5).

The second latency test revealed a discrepancy.  Latency to ODTC averaged 1 ms; latency to IDTC averaged 25 ms.  IDTC experienced the highest latency to the datacenter among all branch offices connected with dedicated circuits to ORP (Jordan, 2011, p. 5).

Time Warner Cable (TWC) provides network connections to each office.  TWC’s network engineer said the higher latency was most likely caused by the large geographic distance between ORP and IDTC.  TWC also noted that it partners with a separate telecommunications company (Telco) to provide service across state lines.  There was no method to determine the number of switches the data passed through to complete the connection; each hop (switch) slightly increased the latency (Jordan, 2011, p. 6).

     Microsoft confirms that latency has a negative impact on network performance.  The following table reports the estimated time to enumerate file share content based on available bandwidth, latency, and the volume of content (Microsoft, 2009):


Table 1
File Crawl Rates

Bandwidth (crawl rate)               | 1 GB   | 5 GB  | 25 GB  | 100 GB | 500 GB
10 Mbps, no latency (467 MB/min)     | 12 sec | 1 min | 5 min  | 20 min | 1 hr 30 min
10 Mbps, 100 ms latency (330 MB/min) | 2 min  | 9 min | 45 min | 3 hr   | 15 hr

(Jordan, 2011, p. 7)

Average file crawl enumeration of the ORP file server was tested from both IDTC and ODTC:
Table 2
Branch Office Latency

Location | Latency | Enumeration
IDTC     | 23 ms   | 130 sec
ODTC     | 1 ms    | 3 sec

(Jordan, 2011, p. 7)
Microsoft’s published file crawl rates are consistent with the measurements collected between ORP and IDTC.  The research demonstrates a negative impact once roughly 75 MB of content volume is processed:
Table 3
Critical Mass Data

Data size reference           | 1 GB = 1,000 MB
Expected file crawl at IDTC*  | 23 ms = 75 MB

Note. *Calculations are based on 10 Mb of available bandwidth.

Bandwidth:  

Latency measures how quickly data is delivered.  Bandwidth indicates the amount of data that can be delivered at once.  Throughput is the specific amount of data actually delivered.  Throughput is affected by variables including the slowest-speed link and external interference (Odom & Knott, 2006).  Data throughput and available bandwidth were tested between IDTC and ORP.
Network traffic was generated to compare the throughput rates between IDTC to ORP and ODTC to ORP.  The traffic was generated from the ORP datacenter and transmitted to the print servers at each branch office.  The inbound and outbound bandwidth results were similar up to 6 Mbps.  There was a noticeable performance difference between the branch offices when more than 6 Mbps of data traffic were delivered (Jordan, 2011, p. 8).
Traffic flow to ODTC worked as expected; transfers of up to 10 Mbps completed between ODTC and ORP.  The IDTC transfer rates decreased significantly when more than 6 Mbps of data delivery was attempted.  When more than 6 Mbps was sent, the average latency doubled and the outbound rate dropped from 6 Mbps to under 2 Kbps (slower than an analog modem).
Table 4
Data Sent from ORP to IDTC

% of 10 Mb | Data Sent  | Receive   | Transmit  | Latency
30%        | 3,043 Kbps | 2.85 Mb   | 2.85 Mb   | 25 ms
40%        | 4,004 Kbps | 5.4 Mb    | 5.4 Mb    | 31 ms
60%        | 5,916 Kbps | 2,028 bps | 1,313 bps | 80 ms
75%        | 7,523 Kbps | Time Out  | Time Out  | Time Out

(Jordan, 2011, p. 8)
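The traffic-generation tests above used dedicated tooling, but the principle can be sketched in a few lines of Python.  The example below is illustrative only: it pushes a fixed amount of data through a TCP socket (over loopback, standing in for the WAN path) and reports the achieved throughput in Mbps.

import socket
import threading
import time

PAYLOAD = b"x" * 65536     # 64 KB send buffer
TOTAL_MB = 10              # amount of data to transfer

def receiver(server_sock):
    """Drain the connection until the sender closes it."""
    conn, _ = server_sock.accept()
    while conn.recv(65536):
        pass
    conn.close()

# Loopback stand-in for the ORP-to-branch-office path.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=receiver, args=(server,), daemon=True).start()

client = socket.create_connection(server.getsockname())
start = time.perf_counter()
for _ in range(TOTAL_MB * 1024 * 1024 // len(PAYLOAD)):
    client.sendall(PAYLOAD)
client.close()
elapsed = time.perf_counter() - start

print(f"Throughput: {TOTAL_MB * 8 / elapsed:.2f} Mbps")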


TWC provided network tests between IDTC and ORP to confirm the previous results.  The first tests indicated possible network problems: packets sent back and forth experienced poor throughput and high latency.  TWC repeated the network tests the following day and reached the opposite conclusion, and therefore considered the first tests inaccurate.  TWC’s results were ultimately inconclusive because of the relay through the second Telco in Indiana.  Because a duplex mismatch was still suspected, the routers were replaced at both ORP and IDTC.  Both sites were then able to send and receive a full 10 Mbps of data without issue.  After the network throughput problem was resolved, staff at IDTC continued to experience the original file server problem (Jordan, 2011, p. 11).

Protocols:      

Users’ accounts of the problem were subjective, so further tests were required to identify the exact cause.  The first tests were conducted on the ORP local area network (LAN).  An ORP workstation was used to connect to the ORP file server.  The results were positive; more than 200 files populated in less than one second.

The second test connected an IDTC thin client with the Citrix terminal server located at ORP.  At ORP, terminal services host simultaneous client sessions and provide individual Windows desktops.  Applications operate entirely from the server.  Only keystrokes, mouse movements, and display data are exchanged between the thin clients and terminal server (Microsoft, 2003).  When connected via terminal server, the network resources are considered local to the ORP LAN.  Test results confirmed that network problems were absent from terminal server sessions.  File server directories populated information in less than one second.


The third test connected an IDTC workstation to the ORP file server.  Windows Explorer was used to browse to remote directories at ORP.  It took more than five minutes for all of the files to populate.  The same test was applied a second time while a network sniffer examined the traffic.  (A network sniffer is a software utility that is used to troubleshoot network problems.)  The network sniffer logged the data conversation between the two endpoints.  The log results documented SMB as the primary network protocol.  SMB is used for file sharing and network printing.  Most documents and spreadsheets that reside on file servers depend on SMB for delivery to client endpoints (MSDN, 2012).  

The logs showed that the SMB protocol sent duplicate data across the WAN when a workstation at IDTC connected to the file server at ORP.  In some instances, the same file information was delivered as many as 14 consecutive times.  The process stopped before the data fully enumerated and then repeated itself.  The SMB transmission process was stuck in a repetitive loop that resulted in poor network performance and slow enumeration times.
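A minimal sketch of this kind of analysis is shown below, assuming the third-party scapy library and capture privileges.  It fingerprints SMB payloads on TCP port 445 and counts repeats, which is how the duplicate transmissions described above would surface:

from collections import Counter
from hashlib import md5

from scapy.all import Raw, TCP, sniff  # requires root/administrator rights

seen = Counter()

def fingerprint(pkt):
    """Hash each SMB payload so repeated transmissions stand out."""
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        seen[md5(bytes(pkt[Raw].load)).hexdigest()] += 1

# Capture one minute of file-sharing traffic (SMB runs over TCP 445).
sniff(filter="tcp port 445", prn=fingerprint, store=False, timeout=60)

for digest, count in seen.most_common(5):
    if count > 1:
        print(f"payload {digest[:8]}... sent {count} times")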

Further research confirmed the inherent limitations of SMB and file server performance over a wide area connection.  Vinodh Dorairajan is credited with coining the term WAFS (Wide Area File Systems).  According to Dorairajan, “file sharing protocols tend to be rather chatty” (Dorairajan, 2004).  CIFS (SMB) protocols were designed to work well in a LAN environment but do not work well over the WAN (Jordan, 2011, p. 9).
In most situations, additional bandwidth will not resolve problems inherent to SMB over geographic distance.  A single large file may transfer well over the WAN while folder enumeration remains considerably slower (Microsoft, 2008).  In this case, a single 1 GB file was used to test the theory; it transferred successfully in less than seven minutes.  The IDTC network problem mostly occurred while browsing file directories over the WAN.  Even after IDTC throughput had increased by 40%, end users continued to experience the same problems.

Chapter 3:  Available Technology


Research identified four potential technology solutions to address the SMB problem.

WAFS (Wide Area File Servers):

WAFS are specialized servers that are designed to overcome traditional network limitations when data is sent over a WAN.  WAFS increase network efficiency with a combination of data compression and IP spoofing.
IP spoofing is a process normally used by network hackers as a method to gain unauthorized network access.  Each network packet contains a source IP address and a destination IP address.  Routers normally use the destination address to forward data and ignore the source address.  Hackers manipulate the IP packet and fool the remote computer into believing the data was sent from a trusted source (Velasco, 2000).   
WAFS use similar techniques to increase the amount of data sent and received over the WAN.  WAFS use IP spoofing to change the MTU (maximum transmission unit).  The MTU dictates the maximum amount of data that can be transferred per packet.  If a packet is larger than 1,500 bytes it is normally fragmented into smaller packets for transmission (Seifert, 2000).  The additional fragmented packets create network delay.  WAFS overcome the MTU limits by manipulating IP packet headers; more data is delivered than traditional MTU standards allow (Citrix, 2012).
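A back-of-the-envelope calculation makes the fragmentation overhead concrete.  The sketch below assumes the standard 1,500-byte Ethernet MTU and roughly 40 bytes of IP and TCP headers per packet:

import math

MTU = 1500          # standard Ethernet MTU in bytes
HEADERS = 40        # assumed IP + TCP header overhead per packet
PAYLOAD = MTU - HEADERS

for size_mb in (1, 10, 100):
    size = size_mb * 1024 * 1024
    packets = math.ceil(size / PAYLOAD)
    overhead = packets * HEADERS
    print(f"{size_mb:>3} MB -> {packets:,} packets, "
          f"{overhead / 1024:.0f} KB of header overhead")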

WAFS optimization was considered as a solution to the SMB problem.  Cisco, Citrix, and Riverbed each offered WAFS appliances, but all were considered too expensive.  The appliances were cost prohibitive because IDTC had already invested substantial resources in its WAN, and the minimum cost of $50,000 was not available in the current budget.  The network staff decided to review additional technology.

DFS (Distributed File System): 

DFS consolidates file services for the end user.  Its primary purpose is to allow multiple file servers to serve data from a single UNC (uniform naming convention) path.  A UNC is similar to a URL (uniform resource locator): a URL is a web address and a UNC is a file server address.
When multiple file servers are used (without DFS), the end users must access the respective servers from separate UNCs.  For example:
            \\fileserver1\data
            \\fileserver2\data
When DFS is implemented, the data from multiple servers can be accessed from a single domain file share.  The same paths from the previous example can be consolidated into a single UNC.  For example:
\\uwstout.edu\data\
DFS divides file services between the branch office and the datacenter.  This solution requires a file server located at both IDTC and ORP.  IDTC-related content can be hosted in Indiana while all other content remains in Wisconsin.  A single UNC presents the separate file servers as a single system.  This method does not resolve the SMB-related problems, but it helps because it makes IDTC less dependent on the file server at ORP.
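The namespace idea can be illustrated with a toy resolver.  The sketch below is not how DFS is actually implemented; it simply shows how one logical UNC root can front separate physical servers (the server names reuse the examples above):

# Toy illustration of DFS-style namespace resolution: one logical UNC
# root is mapped onto separate physical file servers.
NAMESPACE = {
    r"\\uwstout.edu\data\idtc": r"\\fileserver2\data",   # hosted at IDTC
    r"\\uwstout.edu\data":      r"\\fileserver1\data",   # hosted at ORP
}

def resolve(unc_path):
    """Return the physical server path behind a DFS-style UNC path."""
    for prefix in sorted(NAMESPACE, key=len, reverse=True):
        if unc_path.lower().startswith(prefix.lower()):
            return NAMESPACE[prefix] + unc_path[len(prefix):]
    return unc_path  # not part of the namespace

print(resolve(r"\\uwstout.edu\data\idtc\reports"))
# -> \\fileserver2\data\reports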

DFSR (Distributed File Services Replication):

DFSR provides file server redundancy through replication.  Data located on the first file server can be synchronized with the second file server.  If one file server becomes unavailable the UNC serves files from the second file server.  The process is seamless from the end user’s perspective.
            Replication can also be used to mask SMB-related problems.  This solution requires a file server located at both IDTC and ORP.  DFSR will copy files between the two servers.  Any insertions, removals, and rearrangements of data within the files will be replicated (MSDN, 2012).
            DFSR will not resolve the SMB problem but it provides a functioning alternative.  IDTC staff will not experience delay or enumeration problems with the file server on their LAN.  The drawback to DFSR is the lack of geographic file locking.  A single Windows server prevents multiple users from editing a single file at the same time.  With replicated data on separate servers it becomes possible for users from IDTC and ORP to overwrite each other’s changes (Pyle, 2009).  This dilemma can be limited but not fully eliminated.  The IT staff believed DFSR had potential but they were hesitant to introduce a separate problem.

BranchCache:

Microsoft BranchCache is a Windows service designed to increase application performance and reduce WAN traffic when content is accessed from branch offices.  BranchCache stores local copies of remote files and only retrieves data over the WAN when clients request it.  The cached files are stored on local workstations or servers.  When clients within the LAN request a cached file, the client downloads it from the cache instead of the remote server across the WAN (Microsoft, 2009).  BranchCache was considered more favorable than DFSR because it addressed the geographic file locking limitations (Microsoft, 2008).
Implementation:                                       

The IT staff decided to implement the BranchCache technology because of its simple implementation and affordability.  The service was packaged with Windows as an installable feature.  The IT staff passed on WAFS optimization because of the additional expense.  DFS was not chosen because it did not specifically address the limitation of SMB over the WAN.  DFSR could work, but it also introduced additional risk.
BranchCache was only available with Windows 2008 and the file server ran on Windows 2003.  Licensing for Windows 2008 had previously been purchased and upgrades were already planned.  The BranchCache project expedited the server upgrade.  The ORP file server was the first server to be upgraded to the Windows 2008 platform.
The file server upgrade required minimal downtime because the system volume (operating system) was kept separate from the data volume.  The 2003 file server was shut down and the system volume was removed.  A pre-built volume configured with Windows 2008 was then paired with the original data volume.
File service tests were run before BranchCache was installed on the Windows 2008 server.   File browsing was performed between an IDTC Windows 2003 print server and the new ORP file server.  Directory browsing and file enumeration were slow.

A second test was conducted from a separate computer in Indiana.  IT staff accessed a Windows Vista workstation and repeated the enumeration process with the file server in Wisconsin.  The second test had different results.  Directories populated in less than two seconds.  The installation of BranchCache was put on hold to allow further research of the new development.

SMB2:

The Windows 2008 file server improved file services over the WAN.  It was later discovered that Windows 2008 included an improved version of SMB.  SMB2 had significant improvements that allow for fast folder enumeration and file copying over connections with high latency.  In order to use SMB2, both the client and the server must support the protocol (Barreto, 2008).  File services performed poorly from the IDTC Windows 2003 print server because the older SMB protocol was used.  Enumeration tests were quick from the IDTC Windows Vista workstation because it ran SMB2.  The Windows 2008 file server performed best with SMB2.

Additional workstations were tested to ensure quick enumeration and file transfers over the WAN.  All workstations at IDTC had Windows Vista or Windows 7 operating systems installed.  SMB2 resolved the file service problems caused by high latency between IDTC and ORP.  An unintended consequence resolved the network problem: BranchCache was no longer needed because file services worked as expected.
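Verification of this kind can be scripted.  The sketch below, with a hypothetical UNC path standing in for the ORP share, times a directory enumeration the same way the manual tests did:

import os
import time

SHARE = r"\\orp-fs01\data\projects"   # hypothetical UNC path to the ORP share

start = time.perf_counter()
entries = [entry.name for entry in os.scandir(SHARE)]
elapsed = time.perf_counter() - start

print(f"Enumerated {len(entries)} items in {elapsed:.2f} s")
# SMB1 clients saw minutes for large directories; SMB2 clients, seconds.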

Chapter 4:  Future Innovations

IT staff continued to monitor the network traffic between IDTC and ORP in the weeks that followed.  IDTC staff members with workstation access were also contacted to confirm service was working satisfactorily.  After four weeks the specific problem was considered resolved.  After the trouble ticket was closed, the IT staff at ORP continued to search for innovations to improve network file services to the IDTC branch office.

QoS (Quality of Service): 


 SMB2 allows for quick and efficient data transfer over the WAN.  A comparison between SMB and SMB2 revealed data can transfer up to six times faster over a high-latency network (Barreto, 2008).  More data is delivered within a shorter time frame.  The increased efficiency presents a potential new problem, however.  Most WANs have fixed bandwidth and can only deliver a limited amount of data at any given time.  Too much data transferred at once may saturate the connection and cause congestion.


Network congestion can be relieved with the purchase of additional bandwidth; the larger pipeline allows for greater volume delivery.  There are instances, however, when additional bandwidth cannot be purchased because of physical or cost limitations.  Leased line fees increase proportionately with increased bandwidth and geographic distance.  The IDTC budget does not allow for additional bandwidth.

Network congestion could reintroduce the original content crawl problems at IDTC.  Congestion results from common network activities, including web surfing, video streaming, and file services.  Windows 2008 allows QoS policy to identify and prioritize specific traffic (Davies, 2006).  QoS tags SMB2 traffic to avoid service disruption during periods of high activity.  File enumeration will work well, but at a potential cost to other network services (e.g., slower web surfing).  The trade-off is acceptable because IDTC places higher value on file services than on web surfing.
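Windows applies QoS policy centrally through Group Policy, but the underlying mechanism can be illustrated at the socket level.  The sketch below (hypothetical server name; it works on platforms that honor the IP_TOS option) marks a connection's traffic with a DSCP value that QoS-aware routers can use when queuing packets:

import socket

# DSCP 40 (class selector CS5), shifted into the upper six bits of the
# IP TOS byte.
DSCP_CS5 = 40 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_CS5)
# Traffic sent on this socket now carries a priority mark that
# QoS-aware routers along the WAN can use when queuing packets.
sock.connect(("orp-fs01", 445))   # hypothetical file server endpoint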

SharePoint:

Windows file servers are usually accessed from the Windows desktop environment.  Windows Explorer is used to manually browse through multiple directories to store and retrieve files.  This solution does not always scale well across the WAN.  File management is usually administered by the IT Staff.  Management responsibilities include file directory organization and security. Files can quickly become outdated, unorganized, and unsecured because of management constraints.

Microsoft SharePoint is a document management and collaboration server that addresses some limitations of traditional file servers.  Employees can access SharePoint with a web browser and website address.  SharePoint server eliminates WAN-related file enumeration problems because web browsers do not use the SMB protocol.  SharePoint provides additional benefits, including meta-tags and delegation.  Meta-tags are document keywords that allow for robust search capabilities.  The ability to search with key words is more efficient than manually browsing.  Delegation allows different departments to self-manage their content.  Each business unit can assign staff permissions and make directory changes.  

SMB3: 

Microsoft released Windows Server 2012 and SMB3 in May 2012.  SMB3 was enhanced to further improve network performance.  SMB Directory Leasing is a subset function of SMB3 that enables clients to cache directory and meta-data.  The local directory caching reduces round-trip protocol traffic from the file server (Snover, 2012).  SMB3 satisfies all file server requirements for IDTC because file enumerations are faster and bandwidth requirements are reduced.

BranchCacheV2:

Windows Server 2012 also introduces an enhanced BranchCache.  BranchCache V2 takes advantage of SMB3 improvements.  BranchCache V2 consumes fewer CPU cycles, which reduces server resource load.  Reduced WAN traffic and storage requirements are achieved because duplicate data is stored and downloaded only once per branch office.  Only the small changes made to a large file are delivered and cached.  The service also divides files into smaller units through hash algorithms for further bandwidth savings.  Both SMB3 and BranchCache deliver and store data with encryption.
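The general idea behind this block-level deduplication can be sketched in a few lines.  The example below is not Microsoft's actual hashing scheme; it divides two hypothetical file versions into fixed-size blocks, hashes each, and reports how many blocks would actually need to cross the WAN:

import hashlib

BLOCK = 64 * 1024   # illustrative block size, not BranchCache's real scheme

def block_hashes(path):
    """Hash a file in fixed-size blocks, as content caches do."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

# Hypothetical file versions standing in for a document edited at a branch.
old = block_hashes("report_v1.docx")
new = block_hashes("report_v2.docx")

# Only blocks whose hashes differ would need to cross the WAN.
changed = [i for i, (a, b) in enumerate(zip(new, old)) if a != b]
changed += list(range(len(old), len(new)))   # blocks appended in v2
print(f"{len(changed)} of {len(new)} blocks changed")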

Chapter 5:  Conclusion

A combination of latency and an outdated SMB protocol caused network disruption for the staff at IDTC.  Research identified the problem with network tests for latency, bandwidth, and protocol performance.  After the problem was defined, existing technology was examined to determine a resolution.  WAFS, DFS, DFSR, and BranchCache were considered potential candidates.  The file server was upgraded to Windows 2008 in preparation for the BranchCache installation.  Before BranchCache was installed on the server, it was discovered that file services over the WAN had already improved.  Additional research found the improved performance resulted from the enhanced SMB2 protocol.  Although the network problem between IDTC and ORP was resolved, continued research identified technology that could prevent potential problems and provide additional benefits.

References:




Barreto, J. (2008, November 11). File Server performance improvements with the SMB2 protocol in Windows Server 2008. Retrieved October 2012, from TechNet: http://blogs.technet.com/b/josebda/archive/2008/11/11/file-server-performance-improvements-with-the-smb2-protocol-in-windows-server-2008.aspx

Citrix. (2012, October). How Branch Repeater Works. Retrieved October 2011, from Citrix: http://www.citrix.com/English/ps2/products/feature.asp?contentID=1686852

Davies, J. (2006, March). Policy-based QoS Architecture in Windows Server 2008 and Windows Vista. Retrieved from Microsoft TechNet: http://technet.microsoft.com/library/bb878009

Dorairajan, V. (2004, May 25). Enabling File Sharing Over the WAN. Retrieved October 2012, from EE Times: http://www.eetimes.com/electronics-news/4144653/Enabling-File-Sharing-over-the-WAN

Jordan, S. (2011). ICT & SMB2. (Unpublished research from ICT-701). Menomonie, WI: UW-Stout.

Microsoft. (2003, March 28). Remote Access Technologies. Retrieved October 2012, from Microsoft TechNet: http://technet.microsoft.com/en-us/library/cc755399(v=ws.10).aspx

Microsoft. (2008, February). Branch Office Infrastructure Solution Architecture Guide. Retrieved October 2012, from Microsoft Branch Office Tech Center: http://download.microsoft.com/download/4/2/e/42e8ee6e-5365-4e79-b3bf-b10fdac3170e/BOIS%20Architecture%20Guide.docx

Microsoft. (2008, August). Optimizing Applications for Remote File Access Over WAN. Retrieved October 2012, from PDC Microsoft Professional Developers Conference: http://download.microsoft.com/download/f/2/1/f2146213-4ac0-4c50-b69a-12428ff0b077/Optimizing_Applications_for_Remote_File_Access_Over_WAN.pptx

Microsoft. (2009, January). BranchCache Executive Overview. Retrieved October 2012, from Microsoft: http://www.microsoft.com/en-us/download/confirmation.aspx?id=4606

Microsoft. (2009, April 23). Plan for Bandwidth Requirements. Retrieved October 2012, from Microsoft: http://technet.microsoft.com/en-us/library/cc262952(office.12).aspx#section3

MSDN. (2012, October 16). DFSR Overview. Retrieved October 2012, from Microsoft Developer Network : http://msdn.microsoft.com/en-us/library/windows/desktop/bb540025(v=vs.85).aspx

MSDN. (2012, September 7). Microsoft SMB Protocol and CIFS Protocol Overview. Retrieved October 2012, from Microsoft Developer Network: http://msdn.microsoft.com/en-us/library/windows/desktop/aa365233(v=vs.85).aspx

Odom, W., & Knott, T. (2006). Networking Basics. Indianapolis: Cisco Press.

Pyle, N. (2009, February 20). Understanding (the Lack of) Distributed File Locking in DFSR. Retrieved October 2012, from Ask the Directory Services Team: http://blogs.technet.com/b/askds/archive/2009/02/20/understanding-the-lack-of-distributed-file-locking-in-dfsr.aspx

Seifert, R. (2000). The Switch Book. New York: John Wiley & Sons, Inc.

Snover, J. (2012, April 19). SMB 2.2 is now SMB 3.0. Retrieved from Microsoft Windows Server Blog: http://blogs.technet.com/b/windowsserver/archive/2012/04/19/smb-2-2-is-now-smb-3-0.aspx

Velasco, V. (2000, November 21). Introduction to IP Spoofing. Retrieved October 2012, from SANS Institute: http://www.sans.org/reading_room/whitepapers/threats/introduction-ip-spoofing_959