
Tuesday, January 20, 2015

West African Symposium on Technology, Science, Sustainability, and Computing, 2015



Link: http://tssc2015.yolasite.com/
 
When: Mar 23, 2015 - Mar 24, 2015
Where: Serrekunda, The Gambia
Abstract Registration Due: Mar 13, 2015
Submission Deadline: Mar 20, 2015
Notification Due: Mar 30, 2015
Final Version Due: Apr 30, 2015
Categories: computer, information systems, technology, science
 

Call For Papers

AIMS AND SCOPE

The West African Symposium on Technology, Science, Sustainability, and Computing, TSSC 2015, is devoted to reviewing current achievements and trends that have the ability to affect Africa and other developing nations. The main purpose of this international symposium is to discuss a broad range of topics in a unique and collaborative setting.

The quality of presentations is very high, and lively discussions take place during the different sessions between computer scientists, technologists, and drivers of sustainability, reflecting the multidisciplinary nature of the research activities around micro/nanotechnologies for sensors and devices. Past conference success is due to the help of many people, groups, and institutions.

The symposium will include a few 40-minute general review talks to introduce the current problems, and 20-minute talks to discuss new experimental and theoretical results. A series of 15-minute talks will cover ongoing research by students in the West African region. There are also some general talks about the future directions of scientific research on cyber security, sustainable design, laws, policies, and computing. Additionally, there will be hands-on workshops that everyone can participate in.

For those who participate virtually, there will be recorded sessions on Google Talk that will be archived online for all to view.


TOPIC AREAS

Potential topics include, but are not limited to:
1) New trends in computing
2) Information systems
3) Other technological advances
4) Systems design
5) Innovations in science
6) Open source software and hardware
7) Sustainability
8) Educational trends in the fields of science and technology

Students may submit on any technical subject area; however, high preference will be given to submissions in computing. The subject focus should be on science and technology in emerging nations. If you have questions concerning your topic area, email us at dawsonmau@umsl.edu.

SUBMISSION DATES
Manuscript Due - March 20, 2015
First Round of Reviews - March 30, 2015
Publication Date - June 2015

23-24 March 2015, Serrekunda, The Gambia at The University of The Gambia Faculty of Law Building
2-5 April 2015, VIRTUAL

PUBLICATION OUTLETS

Proceedings will be published, and authors of selected papers will be invited to contribute a chapter to an IGI book.

ASSOCIATED JOURNAL

Authors of the best papers will be invited to submit an extended paper to the International Journal of Strategic Information Technology and Applications (IJSITA). The journal provides state-of-the-art research on the optimization of performance in corporations, groups, associations, communities of practice, community organizations, governments, non-profits, nations, and societies that implement information systems. It covers analysis and avoidance of risk, detection and prevention of problems, acquisition and management of knowledge, preparation and response to emergencies, enhancement of decision making, facilitation of collaborative efforts, and incremental organizational wisdom.

Friday, January 2, 2015

Dawson, M., Leonard, B., & Rahim, E. (2015). Advances in Technology Project Management: Review of Open Source Software Integration. In M. Wadhwa, & A. Harper (Eds.) Technology, Innovation, and Enterprise Transformation (pp. 313-324). Hershey, PA: Business Science Reference. doi:10.4018/978-1-4666-6473-9.ch016


Advances in Technology Project Management: Review of Open Source Software Integration

Maurice Dawson, University of Missouri – St. Louis, USA
Brian Leonard, Alabama A&M University, USA
Emad Rahim, Oklahoma State University, USA

ABSTRACT
As organizations must continually drive down costs of software-driven projects, they need to evaluate the Systems Development Life Cycle (SDLC) and other software-based design methodologies. These methodologies include looking at software-based alternatives that could save a significant amount of money by reducing the amount of proprietary software. This chapter explores the use and integration of Open Source Software (OSS) in software-driven projects to include in enterprise organizations. Additionally, the legalities of the GNU General Public License (GPL), Lesser General Public License (LGPL), Berkeley Software Distribution (BSD), and Creative Commons are explored with the integration of these OSS solutions into organizations. Lastly, the chapter covers the software assurance and cyber security controls to associate with OSS to deploy a hardened product that meets the needs of today’s dynamically evolving global business enterprise.

APPROACH
The authors reviewed multiple Linux distributions and their uses, and examined in depth the copyright and open-source licensing legal implications.
IMPRESSION
As indicated through legal case reviews, there are some very valuable benefits to open source software, in that it allows for collaboration in the development of new software and technology that can undoubtedly spur innovation and improve many processes and functions that individuals and businesses in our society rely on every day. Consequently, one purpose of the GNU GPL is to protect and preserve individual rights and the creativity of others while at the same time providing a benefit and contributing to society at large. OSS must be considered in the development process, as it is essential to overall license cost reduction and enables the reuse of already constructed software.
PROJECT MANAGEMENT
There are numerous perspectives regarding the concept of project management as this is a field with many employment opportunities in various industries such as defense or aerospace (Dawson & Rahim, 2011). Thus, the definitions generated by these perspectives also vary, according to the context in which it is discussed. However, the purpose of most project management activities is generally similar. Project management is a way of managing and organizing corporate resources so the available resources can generate the completion of a project within given scope, time, and resource constraints (Wideman, 2001).
The understanding behind project management also accounts for the definition of a project. A project is a unique endeavor performed to create certain products, services, or results (Project Management Institute, 2009). This definition is dissimilar to the definitions of process and operation due to several factors. The easiest to define is the time-constraint factor. A project performs the work necessary to complete activities within a limited amount of time, while processes and operations generally account for on-going continuous effort. A project aims to produce a single or a group of products, services, or results and the chain of activities are terminated once these are produced. Thus it is important to understand the acquisition of Information Technology (IT) and Information Systems (IS) in project management (Rahim & Dawson, 2010).
Software Design Methodologies
The SDLC is a process for planning, creating, testing, and deploying ISs (Avison & Fitzgerald, 2003). Requirements are an impact factor, as they feed the development and serve as an important prerequisite to it. The SDLC is a modified waterfall method: when objectives are not met the process moves backward, but the goal is to continually move forward into the next process steps, such as system deployment. Another design methodology is agile software development. Agile is based on iterative and incremental development, in which requirements and solutions evolve through collaborating teams (Cockburn, 2002). In agile it is essential to understand the people factor to ensure success (Cockburn & Highsmith, 2001). A modified agile methodology is Scrum, an iterative and incremental software development framework (Rising & Janoff, 2000). All of the methodologies described allow for code reuse and the integration of OSS. As design methodologies continue to grow, so does the need for quicker development; to achieve this effectively, one should consider the option of code reuse.
Details of Linux
The definition, terms, and understanding of open-sourcing have been synonymous with the World Wide Web. Linux is a Unix-like OS built on the Linux kernel, developed by Linus Torvalds together with thousands of software engineers. As of 2012 there are over two hundred active Linux distributions. The majority of the kernel and associated packages are free and OSS. This type of software carries licenses that grant users the right to use, copy, study, change, and improve the software, as the source code is made available. Providing source code allows an organization's developers or engineers to understand the inner workings of development. Imagine being able to study Mac or Windows by viewing all the source code to replicate similar developments. This exercise would be great for a new developer learning low-level coding techniques, design, integration, and implementation. Students and faculty could actively participate in design groups in which they would contribute code or design guidance for upcoming software releases. However, some distributions charge for updates or assistance related to specific needs, such as OS modifications for server hosting. In software, a package management system automates the process of installing, configuring, upgrading, and removing software packages from an OS. In Linux OS builds the most common package management systems are Debian-based, Red Hat Package Manager (RPM), Knoppix-based, and netpkg.
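The role of a package management system can be sketched in a few lines. The following Python snippet is a minimal illustration, not from the chapter: it maps a few of the distributions discussed here to their package-management family and builds, without executing, the command each would use to install a package. The front-end command strings are assumptions about the usual tooling for each family.

```python
# Illustrative sketch: distribution-to-family mapping mirrors Table 1;
# the install commands are the assumed standard front ends per family.

PACKAGE_FAMILIES = {
    "Ubuntu": "Debian-based",
    "Edubuntu": "Debian-based",
    "Damn Small Linux": "Knoppix-based",
    "Fedora": "RPM-based",
    "CentOS": "RPM-based",
}

INSTALL_COMMANDS = {
    "Debian-based": "apt-get install",   # dpkg/APT front end
    "RPM-based": "yum install",          # RPM front end
    "Knoppix-based": "apt-get install",  # Knoppix reuses Debian tooling
}

def install_command(distro: str, package: str) -> str:
    """Build the shell command that would install `package` on `distro`."""
    family = PACKAGE_FAMILIES[distro]
    return f"{INSTALL_COMMANDS[family]} {package}"

print(install_command("Ubuntu", "gcc"))   # apt-get install gcc
print(install_command("Fedora", "gcc"))   # yum install gcc
```

The point of the abstraction is that derivatives inherit their parent's tooling: anything Ubuntu-based resolves to the same Debian-family command.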
Since Linux does not have redistribution limits it can be used to replace proprietary OSs in computer labs to save costs. The cost that would be associated with the proprietary labs can be redirected towards additional hardware instead. With the many variations of Linux one can find the appropriate distribution for their targeted use. Table 1 displays the different distributions to include the potential uses.

Table 1. 
Linux distributions and uses
Linux Distribution | Description and Potential Use | Package Management System
Ubuntu | One of the most popular Linux OSs, developed to be a complete OS that can easily replace other comparable OSs. | Debian-based
Edubuntu | OS targeted at grades K-12. It contains many software applications useful to education majors. | Debian-based
Damn Small Linux | Designed as a small OS to be utilized on older hardware. Great for institutions that have old computers and want to revitalize them for use. Also well suited to VMs, as DSL requires a low amount of memory. | Knoppix-based
BackTrack | OS based on Ubuntu for digital forensics and penetration testing. A great tool for students majoring in technology fields. As cyber security becomes a hot topic around the world, this tool lets students learn from over thirty software applications that aid in penetration testing and more. | Debian-based
Kali Linux | OS based on BackTrack; a continuation of the popular penetration testing distribution. | Debian-based
Red Hat Enterprise Linux | Serves as the standard for many enterprise data centers. Developed by Red Hat and targeted at commercial use. Red Hat has a policy against making nonfree software available for the system through supplementary distribution channels, which is why this OS is listed as an exception in terms of OSS. | RPM-based
Fedora | Supported by the Fedora Project and sponsored by Red Hat. Provides a great resource for learning Red Hat Enterprise Linux (RHEL). As thousands of jobs require expertise specifically with Red Hat, this OS is a great tool to prepare students for employment in IT. Fedora has over six Fedora Spins, such as Design-suite, Scientific-KDE, Robotics, Electronic-lab, Games, and more. | RPM-based
CentOS | Derived entirely from RHEL source code, which allows a student to learn RHEL with a small number of differences. CentOS can be used to teach IT students how to set up, administer, and secure a server. | RPM-based
SUSE Linux | OS of German origin, with the majority of its development in Europe. Novell purchased the SUSE brand and trademarks. | RPM-based
Xubuntu | Based upon Ubuntu, but uses the lightweight Xfce desktop environment. | Debian-based
Ubuntu Studio | Derived from Ubuntu and developed specifically for multimedia production such as audio, video, and graphics. Multimedia departments could use this OS for instruction and the development of projects; as many multimedia production tools are expensive, it alleviates large license costs for institutions. | Debian-based
Lubuntu | Based on Ubuntu and uses the LXDE desktop environment, replacing Ubuntu's Unity shell and GNOME desktop. | Debian-based
Chromium OS | An open source lightweight OS targeted at netbooks and mobile devices. | Portage-based
Fedora is an OS based on the Red Hat Package Manager (.rpm) (Proffitt, 2010). Fedora has a side development project known as Fedora Spins which contains multiple spin off versions of the Fedora OS. These spins allow academics, researchers, and students the ability to perform tasks such as cyber security, forensics, electronics design, and more (Petersen, 2013). Two of the spins are lightweight distributions which are key to reviving older systems. Kitten Lightweight Kernel (LWK) and other similar kernels allow individuals the ability to practice development on lightweight OSs (Brightwell, Riesen, Underwood, Hudson, Bridges, & Zaharia, 2003). The possibilities are endless for encouraging low level development, integration, and increasing overall lifecycle expertise.
Why Use Open Source
Using OSS such as Linux allows for a significant reduction in the cost of proprietary licensing. Additionally, when coupled with virtualization, OS capabilities can be replicated in a virtualized layer (Dawson & Al Saeed, 2012). Much software today is too complex to be developed from scratch; thus immediate code reuse adds competitiveness (German & González-Barahona, 2009). In addition, OSS allows developers to perform static code analysis on the source code, as it is readily available (Louridas, 2006). As over half of vulnerabilities are found at the application layer, using OSS could prove beneficial to all stakeholders in the SDLC (Paul, 2011). Additional benefits include the ability to capture all known and unknown risks through the use of sound software engineering practices (McGraw, 1999).
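Because OSS source code is readily available, even a simple static check can be run over it directly. The sketch below is a minimal, assumed example, not the chapter's method: it uses Python's standard ast module to walk a module's syntax tree and report the line numbers of calls to eval, a routine audit item.

```python
# Minimal static-analysis sketch over available source code (illustrative):
# flag every call to eval() by line number.
import ast

def find_eval_calls(source: str) -> list[int]:
    """Return the line numbers of all eval() calls in `source`."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id == "eval"
    ]

sample = "x = 1\ny = eval('x + 1')\n"
print(find_eval_calls(sample))  # [2]
```

Real static analyzers check far more than this, but the underlying point holds: none of it is possible when software ships only as closed binaries.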
Argument against the Use of Open Source
One of the most well-known arguments against the use of OSS is that individuals with malicious intent can find flaws within the code and exploit them (Carrier, 2002). The issue with this particular argument is that proprietary software packages and closed OSs, such as Windows, are also being exploited. More importantly, more research is needed on the accessibility and human use of the different operating environments (González, Mariscal, Martínez, & Ruiz, 2007). The misconceptions surround ease of use: individuals feel they need a mastery of the command line, among other system-administrator-like abilities, to navigate desktop-based OSs. Thus more attention needs to be paid to system usability for industry and home use of modern open source Linux OSs such as Ubuntu 14.04 Long Term Support (LTS) and others (Brooke, 1996).
The U.S. Constitution provides that, “The Congress shall have Power… [t]o promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries;…”1 A copyright gives the author, provided certain legal requirements are met, exclusive rights to distribute, sell, license, produce, and publish the copyrighted material (Cheeseman 2013). Moreover, the Copyright Revision Act of 1976 establishes the legal requirements for copyright protection and provides remedies for copyright infringement (Cheeseman 2013). In addition, in 1989, the U.S. and several other countries signed the Berne Convention, which is an international copyright treaty.
Establishing copyright protection is only half the battle. In order to prove copyright infringement, one must show that a party has copied a substantial and material part of the owner’s copyrighted work without permission (Cheeseman 2013). If successfully proven, an owner of a copyrighted work may recover profits from the infringement, damages suffered by the owner, and even a court order requiring destruction of the infringing material and/or an order preventing such infringement in the future (Cheeseman 2013).
License Agreement
In its most basic sense, a licensing agreement is a detailed agreement which indicates the terms between a licensor, the owner and/or creator of intellectual property, and a licensee, the party who is granted limited rights in or access to the intellectual property (Cheeseman 2013). A licensing agreement may be a contract, but not all licensing agreements meet the requirements of a contract (Stein 2006). The extent to which a licensing agreement meets the legal requirements of a contract may provide greater or lesser legal protection of the intellectual property (Stein 2006).
Traditional Contract Law
Basic contract law requires that in order for an agreement to be legally enforceable, it must contain at minimum, an offer, an acceptance, and must be supported by consideration. An offer is generally a manifestation of intent to be bound, and an acceptance requires an unequivocal assent to the terms of the offer. Consideration is generally defined as bargained for exchange, where each side of the agreement receives some legal value. Generally courts will not inquire into the adequacy of consideration, or in other words the sufficiency of consideration (Cheeseman 2013). If an agreement is a legally enforceable contract, the law provides for several remedies in the event that a party to the agreement does not fully perform the agreement or violates its terms in some way.
Unique Challenges of Open Source Software
Copyleft and Free of Charge
One of the major challenges of “open source software” is the fact that the author or creator of the original source code makes that code available to other users to distribute and modify, free of charge, and in many cases requires that any modification of that source code must also remain free and available to other users down the line, a concept known as copyleft (Stallman, 2013). The problem for traditional copyright law is that it has primarily been focused on protecting and restricting the use and distribution of copyrighted work rather than the free and open distribution thereof. Furthermore, the fact that the software is made available free of charge creates some problems when it comes to determining whether and to what extent the author or creator has suffered any damages, and is therefore entitled to any compensation, as is usually the case in the traditional copyright infringement lawsuit.
Enforceability
Furthermore, given the usual form of the open source licenses utilized by creators and authors of open source software, there is some question as to the availability of proof that the licensee of open source software is even aware of, or ever assents to, the terms of the license (Stein 2006). For example, if a licensee of open source software is not aware that there is a license or what the terms of the license provide, it may be difficult, in the event that the user modifies the licensed source code and then attempts to restrict its availability to others, for the author or licensor to enforce the license against the licensee. This would not be the case, for example, if the licensee were required to download the source code and in doing so had to accept the terms and conditions of the license (Stein 2006).
Lack of Consideration
Another challenge for licensors of open source software is whether and to what extent the license is actually a contract or merely a bare license (Stein 2006; Mandrusiak 2010). If an open source license were considered to be more than just a bare license, but a contract, the author or creator may enjoy greater protection (Stein 2006; Mandrusiak 2010). The problem, however, is that in order to be considered a contract rather than a bare license, the license would have to meet the traditional contract law requirements, which include consideration. In order for an open source license to be enforceable as a contract, the author or creator must show that it is supported by consideration; thus the licensor must be able to show some legal value provided and received for the use of the license and source code. Since many of these open source licenses are made available free of charge, it may be more difficult for a licensor to make such a showing. A licensor may argue that the promise to abide by the terms of the license could itself be sufficient consideration, but it is unclear whether courts would agree with this rationale. If not, then the open source license would not be considered a legally enforceable contract and therefore would not receive the traditional contract law protections or remedies for breach.
Illustrative Cases
There have been a few cases considering the legal protections afforded to open source software code materials. For example, in Computer Associates International v. Quest Software, Inc., et al., the court recognized the validity of the GPL involved in that case, and specifically found that any user of the GNU GPL was bound by its terms.2 Furthermore, the court noted that no copyright protection could be afforded to the modified version of the source code provided, based on the terms of the GNU GPL. However, the Court further found that where the GNU GPL provided an exception for the commercial use of the output of that program, the GNU GPL would not be violated and copyright protection may exist for that output.3
In addition, in Progress Software Corporation, et al., v. MySQL AB, et al., the Court recognized and considered, but did not rule upon at that stage of the case, a GNU GPL that was at issue in that case.4 Furthermore, in Planetary Motion, Inc., v. Techplosion, Inc., Michael Gay A.K.A. Michael Carson, the Court recognized and reiterated that the GNU GPL utilized in that case, “…allows users to copy, distribute and/or modify the Software under certain restrictions, e.g., users modifying licensed files must carry “prominent notices” stating that the user changed the files and the date of any change.”5
Lastly, but certainly not least, probably the most notable case dealing with legal protection, specifically copyright protection, and open source software is Jacobsen v. Katzer, et al.6 In Jacobsen, the U.S. Court of Appeals for the Federal Circuit considered whether and to what extent a copyright holder could use copyright laws to enforce an open source license with respect to software that had been made free and available to the public. The District Court held that while the defendant’s actions may have been in breach of the nonexclusive Artistic License granted to them, they did not rise to the level of copyright infringement, and thus did not allow the copyright holder to use copyright laws to enforce the open source license.
The U.S. Court of Appeals for the Federal Circuit, on the other hand, not only recognized the existence of the Artistic License but held that despite its nonexclusivity, the Artistic License did prevent certain other actions from being taken with regard to the source code, specifically use of the information without compliance with the Artistic License, such as indicating the source of the material and including appropriate notices with any subsequent distribution of the material.7 The Court specifically held that “Copyright holders who engage in open source licensing have the right to control the modification and distribution of copyrighted material.”8 The court also held that the mere fact that open source licenses like the Artistic License at issue in the case are free of charge does not render them devoid of economic value, and does not entitle them to any less protection than other forms of copyrighted material. The Court stated, “[t]he choice to exact consideration in the form of compliance with the open source requirements of disclosure and explanation of changes, rather than as a dollar-denominated fee, is entitled to no less legal recognition.”9
Thus, the Jacobsen case appears to have provided at least one example of a Court affording copyright, and probably contract, protection to open source software utilizing open source licenses, in a way that some had believed was not possible given their unique nature. However, it should be noted that it is not clear whether other federal circuits will follow suit, and/or whether the U.S. Supreme Court will ultimately agree with the Jacobsen Court’s analysis of this issue. As with many issues in the law, we will have to wait and see.
UCITA
In addition to the cases previously discussed, the Uniform Computer Information Transactions Act (UCITA) may also provide some legal protection to open source software and code, in the states where it has been enacted, and except where federal law controls, such as in the area of copyright law. The “…UCITA is a model act that establishes a uniform and comprehensive set of rules governing the creation, performance and enforcement of computer information transactions.” (Cheeseman 2013).
Review of the Specific Licenses
GNU GPL v3
After a review of the terms and conditions provided by this license, it appears to be more comprehensive in its requirements for use of the licensed software. It contains several more terms and appears to contain many more prohibitions than the previous version of the license contained. It includes the requirement to provide appropriate notices for distribution of the code. It also contains specific prohibitions regarding restriction of the subsequent use of the code, including modified versions, by downstream users (Kumar, 2006).
GNU GPL v2
After review of the terms and conditions of this license, this version’s license does not appear to have as many requirements and certainly is not as long as the newest version of this software’s license appears to be. While considerably shorter than the subsequent version’s license, this license does still maintain and include the requirement that appropriate notices accompany the distribution of the code (Kumar, 2006).
LGPLv3
After review of the terms and conditions of this license, this version’s license does not appear to have as many requirements as either the GNU GPL v3 or v2 licenses, but it does maintain several requirements for compliance. Of note, this license includes an exception to the GNU GPL, namely that the work produced under this license may be reproduced without compliance with Section 3 of the GNU GPL, which relates to Protecting Users’ Legal Rights from Anti-Circumvention Law.
LGPL v2
After review of the terms and conditions of this license, this version’s license appears to be somewhat longer than the terms and conditions of the subsequent version’s license, but it appears to be closer to the GNU GPL v2’s license terms than to the LGPL v3’s terms and conditions, and noticeably does not include the exception to the GNU GPL that is contained in the subsequent version of this license.
LLGPL
After review of the Lisp Lesser General Public License (LLGPL), this license is like the LGPL but with a prequel. The prequel defines the license’s effect in terms more typically used in Lisp programs, because the underlying LGPL is grounded in the C programming language, specifically calling out functions not present in languages that are not traditionally compiled (Greenbaum, 2013).
Creative Commons
After review of the terms and conditions of this license, it appears that this license is very similar to that of Modified BSD. It is interesting to note that the license begins by indicating that the company is not a law firm. Additionally, this license appears to include a waiver of copyrights and related rights, and a fall-back in the event that the waiver is invalidated, which appears to be based upon the purpose of promoting the overall ideal of free culture. In addition, this license includes a limitation to make sure that neither patent nor trademark rights are waived by this license.
Artistic License 2.0
After review of the terms and conditions of this license, this license appears to be very similar to that at issue in the Jacobsen case discussed above. Moreover, it appears that this license makes clear that the copyright holder intends to retain some creative control over the copyrighted work overall, while still trying to ensure that the copyrighted material remains as open and available to others as possible under the circumstances.
Modified BSD
After review of the terms and conditions of this license, these terms and conditions appear to be the shortest of all of the licenses reviewed in this paper. Additionally, this license appears to allow reproduction and modification of the copyrighted material provided certain conditions are met, which, if subject to legal challenge, a court might construe as entitled to protection only as a contract at best, or a bare license at worst. Moreover, based upon the legal authorities cited in this paper, it may be unclear whether this license provides sufficient copyright protection.
Clear BSD License
After review of the terms and conditions of this license, this license appears to be very similar to the Modified BSD License, in that it is very short, and appears to allow reproduction only if certain conditions are met. This license does make clear that no patent rights are granted by this license.
CYBER SECURITY AND SOFTWARE ASSURANCE
As malicious intent is an issue with OSS, it is important to deploy software security in the development lifecycle to ensure a proper security posture (McGraw, 2004). To do this effectively while minimizing the effort of developing controls, organizations can adopt government cyber security controls from the National Institute of Standards and Technology (NIST) Special Publication (SP) 800 series, as well as Department of Defense (DoD) guidance (Dawson Jr, Crespo, & Brewster, 2013). On April 26, 2010, the DoD released the third version of the Application Security and Development Security Technical Implementation Guide (STIG), provided by the Defense Information Systems Agency (DISA). This STIG can be used as a baseline for software configuration and development. DISA provides STIGs for other system components that allow for full system hardening, giving the OSS additional security through defense in depth. This process provides for the Availability, Integrity, and Confidentiality (AIC) of the entire system.
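The idea of auditing a configuration against a hardening baseline such as a STIG can be sketched in a few lines. In the Python sketch below, the control names and required values are invented for illustration; they are not taken from the actual Application Security and Development STIG.

```python
# Illustrative baseline audit: each control has a comparison mode and a
# required value (invented examples, not real STIG controls).
BASELINE = {
    "min_password_length": ("min", 15),     # value must meet the floor
    "session_timeout_minutes": ("max", 10), # value must not exceed the ceiling
    "tls_required": ("eq", True),           # value must match exactly
}

def audit(settings: dict) -> list[str]:
    """Return the list of baseline controls that `settings` fails."""
    failures = []
    for control, (mode, required) in BASELINE.items():
        actual = settings.get(control)
        if actual is None:
            ok = False  # missing setting counts as a failure
        elif mode == "eq":
            ok = actual == required
        elif mode == "min":
            ok = actual >= required
        else:  # "max"
            ok = actual <= required
        if not ok:
            failures.append(control)
    return failures

system = {"min_password_length": 8,
          "session_timeout_minutes": 10,
          "tls_required": True}
print(audit(system))  # ['min_password_length']
```

A real STIG check works the same way at much larger scale: every finding maps back to a named control, so remediation can be tracked control by control.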
In the event of a vulnerability finding within the OSS, the software may require redesign and reimplementation. This iterative cycle is costly in time and resources. To truly understand security threats to a system, security must be addressed beginning with the initiation phase of the development process. For an organization, this means allowing the Information Assurance (IA) controls and requirements to drive design and influence the software requirements. Any security threats identified during the requirements and analysis phase will then drive design requirements and implementation, and security defects can be addressed at a component level before implementation. The cost of discovery and mitigation is absorbed within the review, analysis, and quality checks performed during the design and implementation phases of the SDLC. The resulting product has security built in rather than retrofitted. Figure 1 displays the Secure SDLC (S-SDLC) process in which OSS can be implemented into the development process. For Agile or Scrum, this process must be modified to align with that specific design process.
Figure 1. 
Industry standard secure software development life cycle activities
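The S-SDLC idea described above, security activities gating each phase rather than being retrofitted at the end, can be sketched in code. The phase names and gate activities below are our own illustrative examples (not the chapter's Figure 1 or any official standard):

```python
# Illustrative sketch: each SDLC phase carries security "gates" that must be
# complete before later phases begin, so threats found early drive design.
S_SDLC_GATES = {
    "initiation":     ["select IA controls (e.g., NIST SP 800 series)"],
    "requirements":   ["derive security requirements", "threat modeling"],
    "design":         ["attack surface review", "STIG baseline selection"],
    "implementation": ["static code analysis", "peer security review"],
    "testing":        ["penetration test", "vulnerability scan"],
    "deployment":     ["hardening verification", "accreditation"],
}

def gates_before(phase):
    """Return every security activity that must be complete before `phase`."""
    activities = []
    for p, acts in S_SDLC_GATES.items():
        if p == phase:
            break
        activities.extend(acts)
    return activities
```

A defect found at the design gate, for instance, is fixed before a line of production code exists, which is where the cost savings claimed above come from.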
CONCLUSION
As indicated in the Jacobsen case, there are some very valuable benefits to open source software: it allows for collaboration in the development of new software and technology, which can undoubtedly spur innovation and improve many processes and functions that individuals and businesses in our society rely on every day. One purpose of the law is to protect and preserve individual rights and the creativity of others while at the same time providing a benefit and contributing to society at large. How courts will interpret, protect, and enforce open source licenses will depend greatly on how well the case can be made that this form of software and its use is beneficial and still comports with the overall interests that copyright law was intended to serve in the first place. As with any new development, the law will have to strike a delicate balance between the good of the many and the good of the few. The use of OSS proves to be a positive and viable option when appropriate cyber security controls are added to mitigate the risks of its use in projects.
REFERENCES
Avison, D., & Fitzgerald, G. (2003). Information systems development: Methodologies, techniques and tools. McGraw-Hill.
Brightwell, R., Riesen, R., Underwood, K., Hudson, T. B., Bridges, P., & Maccabe, A. B. (2003, December). A performance comparison of Linux and a lightweight kernel. In Proceedings of Cluster Computing (pp. 251-258). IEEE. 10.1109/CLUSTR.2003.1253322
Brooke, J. (1996). SUS: A quick and dirty usability scale. Usability Evaluation in Industry, 189-194.
Carrier, B. (2002). Open source digital forensics tools: The legal argument. @stake Research Report.
Cheeseman, H. (2013). The legal environment of business and online commerce. Academic Press.
Cockburn, A. (2002). Agile software development. Boston: Addison-Wesley.
Cockburn, A., & Highsmith, J. (2001). Agile software development: The people factor. Computer, 34(11), 131-133. 10.1109/2.963450
Computer Associates International, Inc. v. Quest Software, Inc., 333 F.Supp.2d 688, 698 (N.D. Ill. 2004).
Dawson, M., Jr., Crespo, M., & Brewster, S. (2013). DoD cyber technology policies to secure automated information systems. International Journal of Business Continuity and Risk Management, 4(1), 1-22. 10.1504/IJBCRM.2013.053089
Dawson, M., & Rahim, E. (2011). Transitional leadership in the defence and aerospace industry: A critical analysis for recruiting and developing talent. International Journal of Project Organisation and Management, 3(2), 164-183. 10.1504/IJPOM.2011.039819
Dawson, M. E., & Al Saeed, I. (2012). Use of open source software and virtualization in academia to enhance higher education everywhere. Cutting-Edge Technologies in Higher Education, 6, 283-313. 10.1108/S2044-9968(2012)000006C013
German, D. M., & González-Barahona, J. M. (2009). An empirical study of the reuse of software licensed under the GNU General Public License. In Open Source Ecosystems: Diverse Communities Interacting (pp. 185-198). Springer. 10.1007/978-3-642-02032-2_17
González, Á. L., Mariscal, G., Martínez, L., & Ruiz, C. (2007). Comparative analysis of the accessibility of desktop operating systems. In Universal Access in Human Computer Interaction: Coping with Diversity (pp. 676-685). Springer. 10.1007/978-3-540-73279-2_75
Greenbaum, E. (2013). Lisping copyleft: A close reading of the Lisp LGPL. International Free and Open Source Software Law Review, 5(1), 15-30.
Jacobsen v. Katzer, 535 F.3d 1373 (Fed. Cir. 2008).
Kumar, S. (2006). Enforcing the GNU GPL. U. Ill. JL Tech. & Pol'y, 1.
Louridas, P. (2006). Static code analysis. IEEE Software, 23(4), 58-61. 10.1109/MS.2006.114
Mandrusiak, L. (2010). Balancing open source paradigms and traditional intellectual property models to optimize innovation. Maine Law Review, 63(1), 303.
McGraw, G. (1999). Software assurance for security. Computer, 32(4), 103-105. 10.1109/2.755011
McGraw, G. (2004). Software security. IEEE Security & Privacy, 2(2), 80-83. 10.1109/MSECP.2004.1281254
Paul, M. (2011). Official (ISC)² guide to the CSSLP. CRC Press. 10.1201/b10978
Perens, B. (1999). The open source definition. In Open Sources: Voices from the Open Source Revolution (pp. 171-185). Academic Press.
Petersen, R. (2013). Social networking: Microblogging, IM, VoIP, and social desktop. In Beginning Fedora Desktop (pp. 219-227). Apress.
Planetary Motion, Inc. v. Techplosion, Inc., 261 F.3d 1188, 1191 (11th Cir. 2001).
Proffitt, B. (2010). Introducing Fedora: Desktop Linux. Course Technology Press.
Progress Software Corporation v. MySQL AB, 195 F.Supp.2d 328 (D. Mass. 2002).
Project Management Institute (PMI). (2009). A guide to the project management body of knowledge (PMBOK Guide) (4th ed.). Philadelphia: PMI.
Rahim, E., & Dawson, M. (2010). IT project management best practices in an expanding market. Journal of Information Systems Technology and Planning, 3(5), 59-65.
Rising, L., & Janoff, N. S. (2000). The Scrum software development process for small teams. IEEE Software, 17(4), 26-32. 10.1109/52.854065
Stallman, R. (1991). GNU General Public License. Free Software Foundation, Inc. Retrieved from http://www.gnu.org/licenses/licenses.html#GPL
Stallman, R. M. (2013). GNU free documentation license. Academic Press.
Stein, M. (2006). Rethinking the UCITA: Lessons from the open source movement. Maine Law Review, 58(1), 157.
Wideman, R. M. (2001). The future of project management. AEW Services. Retrieved February 23, 2014, from http://www.maxwideman.com/papers/future/future.htm

Dawson, M., Omar, M., Abramson, J., & Bessette, D. (2014). The Future of National and International Security on the Internet. In A. Kayem, & C. Meinel (Eds.) Information Security in Diverse Computing Environments (pp. 149-178). Hershey, PA: Information Science Reference. doi:10.4018/978-1-4666-6158-5.ch009


The Future of National and International Security on the Internet

Maurice Dawson, University of Missouri – St. Louis, USA
Marwan Omar, Nawroz University, Iraq
Jonathan Abramson, Colorado Technical University, USA
Dustin Bessette, National Graduate School of Quality Management, USA

ABSTRACT
Hyperconnectivity is a growing trend that is driving cyber security experts to develop new security architectures for multiple platforms such as mobile devices, laptops, and even wearable displays. The future of national and international security relies on complex countermeasures to ensure that a proper security posture is maintained during this state of hyperconnectivity. To protect these systems from exploitation of vulnerabilities, it is essential to understand current and future threats, including the laws that drive the need to secure them. Examined within this chapter are the potential security-related threats in the use of social media, mobile devices, virtual worlds, augmented reality, and mixed reality. Further reviewed are examples of the complex attacks that could interrupt human-robot interaction, child-computer interaction, mobile computing, social networks, and human-centered issues in security design.

CYBER SECURITY
Cyber terrorism is on the rise and affects millions every day. These malicious attacks can target anyone from a single person to entire government entities, and can be carried out with a few lines of code or with large, complex programs that have the ability to target specific hardware. The authors investigate the attacks on individuals, corporations, and government infrastructures throughout the world, providing specific examples of what a cyber terrorist attack is and why this method of attack is the preferred method of engagement today. The authors also identify software applications that track system weaknesses and vulnerabilities. As the United States (U.S.) government has stated that an act of cyber terrorism is an act of war, it is imperative that we explore this new method of terrorism and how it can be mitigated to an acceptable risk.
Information assurance (IA) is defined as the practice of protecting and defending information and information systems by ensuring their availability, integrity, authentication, confidentiality, and non-repudiation. This definition also encompasses disaster recovery, physical security, cryptography, application security, and business continuity of operations. To survive and be successful, an enterprise must have a disaster recovery strategy and response plan in place to mitigate the effects of natural disasters (e.g., floods, fires, tornadoes, earthquakes), inadvertent actions by trusted insiders, terrorist attacks, vandalism, and criminal activity. To lay the groundwork for this review properly, it is essential to detail the current processes and techniques utilized by officials within the government to accredit and certify systems, including their IA-enabled products (Dawson, Jr., Crespo, & Brewster, 2013).
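Two of the IA pillars named above, integrity and authentication, can be made concrete with standard cryptographic primitives. The sketch below is our own illustration (the message and key are hypothetical): a SHA-256 digest detects modification, while an HMAC additionally proves the message came from a holder of the shared key.

```python
import hashlib
import hmac

# Hypothetical message and shared key, for illustration only.
message = b"audit log entry: user=admin action=login"
shared_key = b"example-shared-secret"

# Integrity: a SHA-256 digest detects any modification of the message.
digest = hashlib.sha256(message).hexdigest()

# Authentication: an HMAC also proves the sender possessed the shared key.
tag = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

def verify_integrity(msg, expected_digest):
    """True only if msg is byte-for-byte unchanged."""
    return hashlib.sha256(msg).hexdigest() == expected_digest

def verify_authenticity(key, msg, expected_tag):
    """True only if msg was tagged by a holder of `key` (constant-time compare)."""
    computed = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(computed, expected_tag)
```

A plain hash alone cannot provide authentication, since anyone can recompute it; the keyed HMAC is what ties the message to a particular sender.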
BACKGROUND
Cyber security has become a matter of national, international, economic, and societal importance that affects multiple nations (Walker, 2012). Since the 1990s, users have exploited vulnerabilities to gain access to networks for malicious purposes. In recent years, the number of attacks on United States networks has continued to grow at an exponential rate, including malicious embedded code, exploitation of backdoors, and more. These attacks can be initiated from anywhere in the world from behind a computer with a masked Internet Protocol (IP) address. This type of warfare, cyber warfare, changes the landscape of war itself (Beidleman, 2009). It removes the need for a physically capable military and instead demands a force with strong technical capacity, e.g., computer science skills. The U.S. and other countries have come to understand that this is an issue and have developed policies to mitigate the threats.
In Estonia and Georgia there were direct attacks on government cyber infrastructure (Beidleman, 2009). The attacks in Estonia rendered the government's infrastructure useless; the government and other associated entities had relied heavily upon this e-government infrastructure. These attacks helped lead to the development of cyber defense organizations that drive laws and policies within Europe.
LAWS AND POLICIES TO COMBAT TERRORISM
The events of 9/11 changed policies not only within the U.S. but also in other countries with respect to how they treat and combat terrorism. The United Nations (U.N.) altered Article 51 of the U.N. Charter, which allows members of the U.N. to take necessary measures to protect themselves against an armed attack to ensure international peace and security.
Israel is a country with some of the most stringent policies towards national and international security. It requires all citizens to serve in the military and maintains multiple checkpoints throughout the country. Israel utilized stringent checks at the airport long before 9/11; however, it now has additional measures to ensure the nation's security, as it is surrounded by countries that have tried to invade before. Israel has also deployed more Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs) to patrol the border in the event something occurs.
The United Kingdom (U.K.) has the Prevention of Terrorism Act 2005 and the Counter-Terrorism Act 2008, both issued by Parliament. The first act was created to detain individuals suspected of acts of terrorism and was intended to replace the Anti-terrorism, Crime and Security Act 2001, which had been deemed unlawful. These acts mirror those created in the U.S. to monitor potential terrorists and terrorists. The U.K. also shares information with the U.S. to coordinate on individuals who may pose a risk.
In the U.S., the methods for national security were enhanced to ensure that no threats occur on U.S. soil. These changes include enhanced security at all ports of entry. The signing of the Homeland Security Act of 2002 (HS Act) (Public Law 107-296) created an organization that received substantial funding and resources for monitoring the security posture of the country. Additional changes include enhanced monitoring of citizens and residents within the country to prevent terrorist activities flagged by the mention of key words, e.g., bomb, explosive, or Al Qaeda.
The USA PATRIOT Act was signed into law by President George W. Bush in 2001, after September 11, 2001 (Bullock, Haddow, Coppola, & Yeletaysi, 2009). This act was created in response to the events of 9/11 and provided government agencies increased abilities, including the right to search various communications such as email, telephone records, and medical records of those suspected of terrorist acts (Bullock, Haddow, Coppola, & Yeletaysi, 2009). This allowed law enforcement to have the upper hand in proactively stopping potential acts against U.S. soil. In 2011, President Obama signed an extension of the USA PATRIOT Act. The act has received criticism from the public due to its potential to be misused or abused by those in power, as it has allowed government agencies to impede on constitutional rights.
The Protecting Cyberspace as a National Asset Act of 2010 was an act that also amends Title II of the Homeland Security Act of 2002. It enhanced the security and resiliency of the cyber and communication infrastructure within the U.S. This act is important because the President declared that any cyber aggression would be considered an act of war, and because Estonia's entire digital infrastructure was taken down by hackers who supported the former Soviet rule. An attack of this type could be damaging to the infrastructure in the U.S., causing a loss of power for days or more, which could result in death. In an area such as the Huntsville metro, there could be multiple nuclear facility meltdowns, loss of ISR (intelligence, surveillance, and reconnaissance) capabilities, and loss of communication to the warfighter being supported.
Additional changes from this act include the ability to carry out a research and development program to improve cyber security infrastructure. At the moment, all government organizations must comply with the Federal Information Security Management Act (FISMA) of 2002. FISMA compliance has revealed many holes within the U.S. cyber security infrastructure, including in those organizations that serve as leads. The act also provides DHS the authority to carry out the duties described in the Protecting Cyberspace as a National Asset Act of 2010.
Stuxnet Worm
During the fall of 2010, many headlines declared that Stuxnet was the game-changer in terms of cyber warfare (Denning, 2012). This malicious worm was complex and designed to target only a specific system; it had the ability to detect location, system type, and more, and it attacked only if the system met specific parameters designed in the code. Stuxnet tampered directly with software in a programmable logic controller (PLC) that controlled the centrifuges at Natanz. This tampering ultimately caused a disruption in the Iranian nuclear program.
America’s Homeland Security Preparing for Cyber Warfare
The Department of Homeland Security (DHS) is concerned with cyber attacks on infrastructure such as supervisory control and data acquisition (SCADA) systems. SCADA systems autonomously monitor and adjust switching, among other processes, within critical infrastructures such as nuclear plants and power grids. DHS is worried about these systems because they are frequently unmanned and remotely accessed; as such, an attacker could remotely take control of critical infrastructure assets. There have been increasing mandates and directives to ensure that any system deployed meets stringent requirements. As the Stuxnet worm has become a reality, future attacks could use malicious code directly targeting specific locations of critical infrastructure.
Cyber Security Certification and Accreditation Processes to Secure Systems
The Department of Defense Information Assurance Certification and Accreditation Process (DIACAP) is the process the Department of Defense (DoD) utilizes to ensure that risk management is applied to Automated Information Systems (AIS) to mitigate IA risks and vulnerabilities (Dawson, Jr., Crespo, & Brewster, 2013). DIACAP is the standard process that all services utilize to ensure that all DoD systems maintain an IA posture throughout the system's life cycle. DIACAP replaced the Department of Defense Information Technology Security Certification and Accreditation Process (DITSCAP). Figure 1 displays the process, which includes five key steps. The first step is to initiate and plan the IA C&A process. The second step is to implement and validate the assigned IA controls. The third step is to make the certification determination and accreditation decision. The fourth step is to maintain authorization to operate and conduct reviews. The final step is to decommission the system.
Figure 1. 
DIACAP stages (Department of Defense, 2007)
Figure 2. 
Process for building virtual world representations of real world items
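The five DIACAP steps described above form a strict sequence from initiation through decommissioning. As a small sketch (our own illustrative encoding, not an official DoD artifact), the life cycle can be modeled as an ordered list with a helper that returns the next step:

```python
# The five DIACAP steps from the text, modeled as an ordered sequence.
DIACAP_STEPS = (
    "initiate and plan IA C&A",
    "implement and validate assigned IA controls",
    "make certification determination and accreditation decision",
    "maintain authorization to operate and conduct reviews",
    "decommission system",
)

def next_step(current):
    """Return the step that follows `current`, or None after decommissioning."""
    i = DIACAP_STEPS.index(current)
    return DIACAP_STEPS[i + 1] if i + 1 < len(DIACAP_STEPS) else None
```

In practice the fourth step (maintaining authorization) repeats through periodic reviews until the system is retired, which is why decommissioning is the only step with no successor here.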
The Common Criteria (CC), an internationally approved set of security standards, provides a clear and reliable evaluation of the security capabilities of information technology (IT) products (CCEVS, 2008). By providing an independent assessment of a product's ability to meet security standards, the CC gives customers more confidence in the security of products and leads to more informed decisions (CCEVS, 2008). Since the requirements for certification are clearly established, vendors can target very specific security needs, while users from other countries can purchase IT products with the same level of confidence, since certification is recognized across all complying nations. Evaluating a product with respect to security requires identification of the customer's security needs and an assessment of the capabilities of the product. The CC aids customers in both of these processes through two key components: protection profiles and evaluation assurance levels (CCEVS, 2008).
The CC is the process that replaced the Orange Book. The CC defines Evaluation Assurance Levels (EAL) 1 through 7. Products evaluated at EAL 1 through 4 may be used and certified in any of the participating countries; however, EAL 5 through 7 must be certified by a country's national security agency. For example, the United States' national agency is the National Security Agency (NSA), and the United Kingdom's is the Communications-Electronics Security Group (CESG). By all accounts, the NSA's Orange Book program, in which the NSA forced vendors through prolonged product testing at Fort Meade, MD, was a dismal failure. The government's failure to adopt Orange-Book-tested products, which were often out of date after years of testing, was also a blow to the vendors that had invested huge sums in Orange Book evaluations.
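The recognition rule just described splits cleanly at EAL 4. A minimal sketch of that rule (our own illustration; the agency names are examples from the text):

```python
# Sketch of the CC recognition rule: EAL 1-4 evaluations are mutually
# recognized among participating countries, while EAL 5-7 must be certified
# by a national authority (e.g., the NSA in the U.S., CESG in the U.K.).
def certifying_scheme(eal, national_agency="NSA"):
    if not 1 <= eal <= 7:
        raise ValueError("EAL must be between 1 and 7")
    if eal <= 4:
        return "mutually recognized by all participating countries"
    return "national scheme only (e.g., %s)" % national_agency
```

The practical consequence is that a vendor targeting only EAL 4 can sell one certified product internationally, while EAL 5+ certification must be repeated per national scheme.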
Additionally, the National Security Agency (NSA) and DHS sponsor a joint venture known as the National Centers of Academic Excellence in IA Education (CAE/IAE), IA 2-Year Education and Training (CAE/2Y), and IA Research (CAE/R) programs. Students who attend institutions with these designations are eligible to apply for scholarships and grants, which they repay through government service. These programs were created to address the lack of available talent in IA. Table 1 shows the Committee on National Security Systems (CNSS) training standards that institutions must map to in order to receive the designation.
Table 1. 
CNSS training standards
Standard       Year   Description
NSTISSI 4011   1994   Information Systems Security Professionals
CNSS 4012      2004   Senior Systems Manager
CNSS 4013      2004   System Administrators in Information Systems Security
CNSS 4014      2004   Information Systems Security Officers (ISSOs)
NSTISSI 4015   2000   System Certifiers
CNSS 4016      2005   Risk Analysis
Since the purpose was to expand the number of IA personnel, it is hard to evaluate the program's real success (Bishop & Taylor, 2009). One of the major problems is the lack of resources available to all institutions that hold the designation. Even though this program is targeted toward post-high-school efforts, more reforms are currently taking place in K-12 education.
Human Computer Interaction
Future national and international threats will be closely tied to the Internet: as more devices are added, the security problem multiplies. Richard Clarke notes that there are currently 12 billion devices connected to the Internet, a figure expected to grow to 50 billion in ten years (Clarke, 2012). Our dependence and interdependence with the Internet creates new challenges, as every device put online adds another exposure or attack vector. The number of devices on the Internet is growing exponentially, and as more applications for technology and wireless technologies are adopted, it will grow even further. Self-driving vehicles, which will be arriving in a few years, come to mind; some self-driving cars exist already, but they are not yet widely adopted or available to the public. When that happens, we will see another jump in the number of connected devices, as each automobile will constitute at least a single IP address, if not more.
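As a quick sanity check on the cited figures, growth from 12 billion to 50 billion devices over ten years corresponds to a compound annual growth rate of roughly 15%:

```python
# Back-of-the-envelope check of Clarke's figures: 12 billion devices growing
# to 50 billion over ten years implies ~15% compound annual growth.
def implied_annual_growth(start, end, years):
    return (end / start) ** (1.0 / years) - 1.0

rate = implied_annual_growth(12e9, 50e9, 10)  # about 0.153, i.e. ~15.3%/year
```

In other words, the device population would grow by about a sixth every year, which is consistent with the "exponential" characterization in the text.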
Communication is the ongoing and never-ending process through which we create our social reality. Never in history has this been truer, as the computing and communication platforms that we have today far exceed anything that has ever been planned or projected. Information technology has radically altered the way people learn and communicate. Weiser notes that the most profound technologies are those that disappear: they weave themselves into the fabric of everyday life until they are indistinguishable from it (Weiser, 1991). Examples include the explosive growth of SMS texting, email, and social media. As these technologies are woven into our lives, so are the dangers.
Research Projects
Many of the research projects that have taken place in mixed reality have been in educational and military domains. The focus of mixed reality research in education is to expand the capability of students to learn, interact, and retain constructed knowledge, and for businesses to maximize the knowledge that they have. Interesting new ways of looking at problems and topical areas enhance the learning experience and add capabilities, such as the ability to create a physical environment when it does not exist in the real world. Researchers (Park et al., 2008) studied human behavior in urban environments using human subjects in a virtual environment, demonstrating that virtual reality and mixed reality can model human behavior and that the products of such research projects are useful and may save time and money. In many situations, they provide an environment for simulation, analysis, and design that would not be possible in the real world.
Most mixed reality devices, at this point, are running on the Internet or another network in order to communicate with one another, so connectivity is very important. Since these devices are entering cyberspace, they are exposed to the same sorts of risks that any device connecting to cyberspace will encounter. Researchers (Cheok et al., 2005) state that mixed reality is “the fusion of augmented and virtual realities.” Mixed reality is more than virtual reality and more than augmented reality; by combining the two we are able to create real-time learning environments, research experiments, and knowledge-based collaboration areas that are enhanced by the application of mixed reality.
Using games for learning and for entertainment is one of the areas for different types of mixed reality applications. Researchers (Pellerin et al., 2009) describe a profile management technique for multiplayer ubiquitous games (MUGs). Such games use network-aware objects, such as an RFID tag, which allow the participant to interact with the physical environment. The hardware and software needed to support a MUG depend on planning out the architecture; in this specific example, an NFC smartcard is used along with a reader and an HTTP server: the NFC smartcard communicates with the NFC reader, which in turn communicates with the HTTP server. This is done to create a mechanism that, as the authors state, guarantees a stronger identification scheme than just a login and password and might help prevent common online game cheats. This approach to handling player profiles allows interactions in both centralized and decentralized ways, similar to the CCNx 1.0 protocol, which also has the goal of allowing centralized and decentralized interaction and communication.
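The kind of identification the smartcard arrangement provides, proving possession of a hardware key rather than transmitting a password, can be sketched as a challenge-response exchange. This is our own minimal illustration of the general idea, not the cited authors' protocol; all names and the key handling are hypothetical:

```python
import hashlib
import hmac
import secrets

def issue_challenge():
    """Server generates a fresh random challenge, which prevents replaying
    an earlier response."""
    return secrets.token_bytes(16)

def card_response(card_key, challenge):
    """Card proves possession of its key without ever transmitting it."""
    return hmac.new(card_key, challenge, hashlib.sha256).digest()

def server_verify(expected_key, challenge, response):
    """Server recomputes the expected response and compares in constant time."""
    expected = hmac.new(expected_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Unlike a login and password, an eavesdropper who captures one exchange learns nothing reusable, since the next session uses a different challenge.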
Virtual Worlds
With the continual rise of virtual world environments such as OpenSimulator (OpenSim) and Second Life (SL), these platforms can be used for positive or negative gains in military warfare in the area of training (Dawson, 2011). OpenSim is an open source multi-user 3D application server designed by taking advantage of, and reverse-engineering, the published Application Programming Interface functions (APIs) and the specific Linden Lab open source parts of the SL code (Dawson & Al Saeed, 2012). One of the strengths of any virtual environment is making it accessible to a variety of users through various protocols. OpenSim provides a method for virtual world developers to create customized virtual worlds that are easily extensible using the technologies that fit their needs. For example, a terrorist could create a virtual representation of a building by using publicly available drafting plans. This virtual representation would serve as scenario-based training for terrorists, and would additionally allow terrorists of different cells or groups to communicate freely. The first step would be for the terrorists to decide their targets. Once targets are decided, they would perform research on the target covering all related items such as technologies, physical infrastructure, and personnel. In the next steps, the individual would capture any online maps or building architectural diagrams that would allow these areas to be rendered within the virtual world. Once the rendering of these areas has been completed, a mock-up scenario would be prepared, allowing a test run to occur and later a live run. These steps can be performed with the use of open source technology at no expense to the terrorist. See the figure below, which outlines the processes described.
With the possible scenario presented, policing the virtual worlds may become a necessity to maintain national security (Parti, 2010). The U.S. Army is currently implementing a program known as the Military Open Simulator Enterprise Strategy (MOSES). MOSES runs on OpenSim and is moving towards a Common Access Card (CAC) enabled environment for secure and encrypted communications (Maxwell & McLennan, 2012). Figure 3 displays an interrogation scenario in MOSES. Additionally, the U.S. could follow a model similar to Estonia, where children from the age of seven to nineteen learn how to develop software programs. This would help in deterring threats by having future developers build security into software from the beginning.
Figure 3. 
MOSES interrogation scenario
Open-Source Software for Cyber Security
Researchers, as well as scientists, have long advocated the use of open-source software for improving the nation's security posture. Open-source software can be an effective tool to protect government networks and defend them against cyber criminals. Corporations, government agencies, and educational institutions have been seriously considering incorporating open-source security software into their systems because of the many advantages it offers: lower cost of ownership, customizability (the ability to modify the code to meet security requirements), and reasonable security. In fact, DHS has already established a $10 million program to fund research efforts aimed at finding open-source software that could be used for security purposes and boost existing cyber defenses (Hsu, 2011). What is encouraging about the future of open-source software for security is that the threat landscape is rapidly changing: attacks are becoming highly organized as well as sophisticated, and the cost of commercial security software continues to rise. This trend, in turn, gives open-source software a cutting edge, enticing businesses and governments to take advantage of its many benefits. Since the U.S. government is looking for ways to cut costs and business organizations view security as a financial burden, it is only a matter of time before open-source software becomes a mainstream and competitive security solution.
BackTrack Linux
BackTrack is a Linux-based operating system designed for digital forensics and network penetration testing (Myers, 2012). It is named after the “backtracking” search algorithm. BackTrack has become a very popular open-source security distribution among security professionals and hackers because it contains a set of security tools that can perform virtually any security task, ranging from attack simulation and vulnerability assessment to web application security and wireless hacking. BackTrack is mainly a penetration testing toolkit used to assess the security of a network, application, or system.
BackTrack Linux is free open-source software that can be downloaded from http://www.backtrack-linux.org. It comes bundled with many other tools that can be installed and run separately from BackTrack, including Nmap, Wireshark, and Metasploit, to name a few. BackTrack was designed with security in mind and provides an environment that makes security testing an easy and efficient task for security professionals. It is considered a one-stop shop and a strong security solution because it offers capabilities that can be used for a variety of security activities such as server exploitation, web application security assessment, and social engineering (BackTrack Linux, 2011).
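The most basic operation behind port scanners such as Nmap is simply attempting a TCP connection and noting whether it succeeds. The sketch below is our own minimal illustration of that idea (it is not how Nmap itself is implemented, and any host/port used with it is a placeholder):

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, unreachable host, or timeout: treat as closed.
        return False
```

Real scanners add much more on top of this, e.g., parallelism, SYN-only probing, and service fingerprinting, but the connect check is the core primitive. Note that scanning hosts you do not own or have permission to test may be illegal.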
Tools and Methods for Monitoring Networks
Monitoring traffic across networks is of great interest to systems administrators because this traffic has a tremendous impact on network security and provides network situational awareness. The ability to monitor and analyze network traffic in real time can help detect, and possibly prevent, cyber criminals breaking into information systems. Network monitoring software enables administrators to understand the state of the network and determine whether malicious or abnormal behavior is present. Such tools can prove valuable in preventing unauthorized access by providing insight into the volume of data flowing over a network, examining and analyzing that data, and ultimately preventing security incidents. Over the years, the open-source security community has developed and published tools capable of monitoring network traffic and deterring possible attacks. More specifically, open-source tools can examine most activities within a computer network, including malicious activity such as scanning attempts, exploits, network probing, and brute-force attacks (Celeda, 2011). Some of the most common open-source tools used for network security monitoring are described below. One example is Snort, an open-source package developed by Sourcefire and used for intrusion detection and prevention (Snort, 2012). Snort is one of the most widely adopted network monitoring technologies; network administrators can use it as a defensive technique to report suspicious traffic activity and alert system administrators to potential cyber-attacks. Snort has gained considerable popularity among network monitoring tools because it combines the benefits of signature-based detection and anomaly detection techniques (Roesch, 1999).
Another reason behind Snort's popularity and success is its ability to perform real-time traffic analysis and packet logging on IP networks (Tuteja & Shanker, 2012). Furthermore, Snort's strength comes from its intrusion prevention capabilities, a newer addition to the tool. The intrusion prevention feature allows Snort to take preventive actions, such as dropping or redirecting data packets, against potentially malicious traffic (Salah & Kahtani, 2009).
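To make the signature-based side of this concrete, the following is a minimal sketch of the matching idea behind an IDS like Snort. The signature names and byte patterns are hypothetical illustrations, not real Snort rules; a production IDS inspects live packets and a much richer rule language, not bare byte strings.

```python
# Hypothetical signatures: each maps a name to a byte pattern that, if seen
# in a packet payload, should trigger an alert.
SIGNATURES = {
    "nop-sled": b"\x90" * 8,            # long run of x86 NOP bytes
    "sql-injection": b"' OR '1'='1",    # classic tautology probe
    "magic-marker": b"\xde\xad\xbe\xef",
}

def match_signatures(payload: bytes) -> list:
    """Return the names of all signatures found in a packet payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

# Example: an HTTP request carrying an injection attempt trips one signature.
print(match_signatures(b"GET /login?u=admin' OR '1'='1"))  # ['sql-injection']
```

Real engines avoid this naive linear scan by compiling all patterns into a single multi-pattern automaton, which is what lets Snort keep up with line-rate traffic.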
Nmap ("Network Mapper") is a free, open-source utility for network discovery and security auditing (Sadasivam, Samudrala, & Yang, 2005). Nmap is a valuable and widely used network scanner that can rapidly discover hosts and services by sending specially crafted packets to the target host and analyzing the responses. Nmap differs from other port scanners in that it does not simply send packets at a predefined constant rate; instead, it takes into account network conditions such as latency fluctuations, network congestion, and the target's interference with the scan at run time. Nmap also has advanced discovery capabilities that go beyond basic port and host scanning: it can identify the type and version of an operating system, what type of firewalls are in use on the network, and what listening services are running on the hosts. Nmap runs on major operating systems such as Microsoft Windows, Linux, and Solaris. It has become one of the most useful scanning tools that network administrators cannot afford to ignore, especially because it is flexible, offers an intuitive interface (Zenmap, the graphical front end), is deployable and cross-platform, and, most importantly, is free.
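The core of what a scanner like Nmap does can be sketched in a few lines. This is only the bare TCP connect scan; the real tool layers raw SYN probes, timing heuristics, OS fingerprinting, and service detection on top of this idea.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Bare-bones TCP connect scan: return the ports that accept a connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

For example, `scan_ports("127.0.0.1", range(8000, 8010))` would report any local services listening in that range. Note that scanning hosts you do not own or administer is, in many jurisdictions, unlawful.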
TOOLS AND METHODS FOR NETWORK ATTACKS
Network attacks pose a significant challenge to information systems because of the dramatic impact they have on computer networks. Network attacks can paralyze networked systems, disrupt services, and bring down entire networks. In recent years, network attacks have increased exponentially and have evolved rapidly in complexity to evade traditional network defenses (e.g., intrusion detection systems and firewalls). As computer networks grow and evolve to include more applications and services, malicious hackers continue to exploit inevitable vulnerabilities in network-based applications. This, in turn, creates fertile ground for hackers to develop and implement complex attacks and break into critical information assets. Below are a few network attacks illustrating their dangers and consequences, along with methods to defend against them.
Hackers use the port scan, one of the most popular reconnaissance techniques, to break into vulnerable network services and applications. Most network services use TCP or UDP ports for their connections. A port scan allows hackers to identify open and available ports by sending a message to each port, one at a time, and waiting for a response. Once a port replies, a hacker can dig further, attempt to find potential vulnerabilities, flaws, or weaknesses in the service behind it, and ultimately launch an attack that compromises the remote host. The consequences of port scans are numerous and diverse, ranging from draining network resources, to congesting network traffic, to actual exploitation of network devices. Cyber criminals utilize a plethora of free, open-source tools to conduct port scans; one of the most popular is Nmap (described in the section above). Nmap provides attractive probing capabilities, such as the ability to determine a host's operating system and enumerate potentially vulnerable services on a port, all of which can help hackers mount an attack.
Combating a port scan attack requires deploying firewalls at critical locations of a network to filter suspicious or unsolicited traffic. Also, security gateways must be able to raise alerts and block or shut down communications from the source of the scan (Check Point Security, 2004).
A SYN attack, also known as SYN flooding, targets the TCP/IP stack. It exploits a weakness in the way most hosts implement the TCP three-way handshake. When host Y receives a SYN request from host X, it keeps the half-open connection in a listen queue for at least 75 seconds (Reed, 2003). Many implementations can only track a very limited number of such connections. A malicious host can exploit the small size of the listen queue by sending multiple SYN requests and never completing the handshake, causing the system to crash or become unavailable to legitimate connections. The ability to remove a host from the network for at least 75 seconds can be used as a denial-of-service attack in itself, or as a tool to implement other attacks, such as IP spoofing (Rouiller, 2003). Mitigating this attack requires a combination of measures such as network address translation (NAT), access control lists (ACLs), and router-level filtering.
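A further, widely deployed defense against SYN flooding (beyond those listed above) is the SYN cookie: the server encodes the half-open connection state into its initial sequence number, so nothing has to wait in the listen queue at all. Below is a toy sketch of the idea; the secret and the derivation are illustrative, not what a real TCP stack uses (real stacks also fold in a coarse timestamp and the client's MSS).

```python
import hmac, hashlib, struct

SECRET = b"rotate-me-periodically"  # illustrative server-side secret

def syn_cookie(src_ip, src_port, dst_ip, dst_port):
    """Derive a 32-bit initial sequence number (ISN) from the 4-tuple.

    Instead of queueing a half-open connection -- the resource a SYN flood
    exhausts -- the server encodes the state into its ISN and recomputes it
    when the final ACK arrives.
    """
    msg = f"{src_ip}:{src_port}>{dst_ip}:{dst_port}".encode()
    digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return struct.unpack(">I", digest[:4])[0]

def ack_is_valid(src_ip, src_port, dst_ip, dst_port, ack_number):
    """The handshake-completing ACK must acknowledge our ISN + 1."""
    expected = (syn_cookie(src_ip, src_port, dst_ip, dst_port) + 1) % 2**32
    return ack_number == expected
```

Because only clients that actually received the SYN-ACK can echo the correct number back, spoofed floods never consume server-side connection state.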
Another attack, known as IP address spoofing or IP spoofing, refers to the creation of Internet Protocol (IP) packets with a forged source IP address for the purpose of hiding the true identity of the sender or impersonating another host on the network. IP spoofing often accompanies denial-of-service attacks, in which attackers flood the network with overwhelming amounts of traffic without being concerned about receiving responses to their packets. Implementing packet filters at the router using ingress and egress filtering (blocking illegitimate packets entering or leaving the network) is the best defense against IP spoofing. It is also good practice to design network protocols so that they do not rely on the source IP address for authentication (Surman, 2002).
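The ingress/egress filtering logic just described reduces to two symmetric checks on a packet's claimed source address. A minimal sketch, assuming a hypothetical 192.168.0.0/16 internal range (a real deployment uses its own assigned prefixes):

```python
import ipaddress

# Illustrative internal prefix; substitute the network's actual ranges.
INTERNAL_NET = ipaddress.ip_network("192.168.0.0/16")

def ingress_should_drop(src_ip):
    """Ingress rule: a packet ARRIVING from outside must not claim an
    internal source address -- if it does, it is spoofed."""
    return ipaddress.ip_address(src_ip) in INTERNAL_NET

def egress_should_drop(src_ip):
    """Egress rule: a packet LEAVING the network must carry an internal
    source address -- anything else is spoofed traffic we should not emit."""
    return ipaddress.ip_address(src_ip) not in INTERNAL_NET
```

Egress filtering is often overlooked, yet it is what stops a network's own compromised hosts from participating in spoofed floods against others.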
Issues with Android Phones and Other Mobile Devices
Smartphones are becoming a more integrated and prevalent part of people's daily lives due to their highly powerful computational capabilities, which support applications such as email, online banking, online shopping, and bill paying. With this fast adoption of smartphones, imminent security threats arise when sensitive personally identifiable information (PII), such as bank account and credit card numbers, is communicated while performing those tasks (Wong, 2005; Brown, 2009).
Traditional attacks (worms, viruses, and Trojan horses) caused privacy violations and disruptions of critical software applications (e.g., deleting lists of contact numbers and personal data). Malware attacks on smartphones were generally “proof of concept” attempts to break through the phone's system and cause damage (Omar & Dawson, 2013). However, the new generation of smartphone malware attacks has increased in sophistication and is designed to cause severe financial losses (caused by identity theft) and disruption of critical software applications (Bose, 2008). Because smartphones are becoming more diverse in providing general purpose services (i.e., instant messaging and music), the effect of malware could be extended to include draining batteries, incurring additional charges, and bringing down network capabilities and services (Xie, Zhang, Chaugule, Jaeger, & Zhu, 2009).
Smartphones are rapidly becoming enriched with confidential and sensitive personal information, such as bank account information and credit card numbers, because of the functionality and powerful computational capabilities built into those mobile devices. Cyber criminals, in turn, launch attacks especially designed to target smartphones, exploiting vulnerabilities and deficiencies in the defense strategies built into smartphone operating systems. Bhattacharya (2008) indicated that because of skill and resource constraints, businesses are ill-prepared to combat emerging cyber threats; this claim holds for smartphones as well, given that those devices are even less equipped with necessary protections, such as antivirus and malware protection software. Some services and features, such as Bluetooth and SMS, create attack vectors unique to smartphones and thus expand the attack surface. For example, in December 2004, a Trojan horse was disguised in a video game and was intended as a "proof of concept," signaling the risks associated with smartphones and the potential compromise of the integrity and confidentiality of the personal information they contain (Rash, 2004). Attackers can easily take advantage of these services and subvert their primary purpose: they can use Bluetooth and SMS to launch attacks by installing software that disables virus protection and spreads via Bluetooth unbeknownst to smartphone users.
With the development of innovative features and services for smartphones, the security measures deployed are not commensurate, because services and features such as MMS and Bluetooth are driven by market and user demand; companies are more inclined to provide entertainment features than security solutions. In turn, this further increases vulnerabilities and opens doors for hackers to deploy attacks on smartphones. Furthermore, Mulliner and Miller (2009) argue that smartphone operating systems allow the installation of third-party software applications, coupled with increases in processing power and storage capacity. Scenarios like this pose serious security challenges because hackers can exploit those vulnerabilities, which are further compounded by users' lack of security awareness. Smartphone attackers are becoming more adept at designing and launching attacks by applying techniques already proven on desktop and laptop computers; smartphones' enhanced features, such as music players and video games, produce easy-to-exploit targets, as attackers send seemingly benign files via music or video game applications and lure users into downloading them. Becher, Freiling, and Leider (2007) indicated that attackers could exploit such vulnerabilities to spread worms autonomously onto smartphones. Hackers therefore usually combine technical expertise with social engineering techniques to trap users into accepting and downloading applications that appear benign but are later used to execute malicious code and affect critical applications running on the smartphone.
Android's core components, such as Linux and connectivity media, are vulnerable to attacks through which personal and confidential information is likely to be compromised. Android's threats are further amplified by the fact that users are limited to using their smartphones for basic services and functions, such as email and SMS/MMS. Users lack the programming mind-set to protect their Android smartphones and stay current with the latest security software updates. This gives hackers an edge to target Android smartphones in the hope of gaining unauthorized access to disable core services (email and web browsing); abuse costly services (i.e., sending MMS/SMS and making calls to high-rate numbers); eavesdrop on calls and most importantly compromise sensitive information to be sold for a price. Android's open-source nature further increases security vulnerabilities because attackers can easily exploit this feature to modify the core applications and install malicious software, which could be used to compromise Android-based smartphones and ultimately cause disruption and monetary loss.
Dangers of Social Networks
Virtual communication has become a distinct area of interest for many, as it has become second nature and is woven into everyday life. People tend to create a social reality based on their connection to the internet and the tools that assist communication. These tools have dangerous sides that the vast majority of users do not see or think about on a daily basis. The danger social networks pose to the public has never been higher than it is now. This danger spreads easily to everyone who uses this mode of communication, because people unintentionally make themselves vulnerable. Connected to a vast number of social networks, people are easily consumed by submitting personal information via the internet. The time is now for the public to understand where they stand in the future of internet connectivity and what they can do to lessen this danger.
Trend in Social Networks
People of all ages are learning to use social networks to stay in touch, reconnect, meet new people, and find out about new places. These websites usually allow the user to present a profile of himself through a long list of very detailed information (Conti, Hasani, & Crispo, 2011). A vast majority of businesses are beginning to use these social networks to find new employees, expand and market a product line, and advertise their brand. These uses help companies grow and expand, because a majority of customers search for products via social networks. Customers are becoming more tech-savvy, using mobile devices to gain internet connectivity in various locations. This helps create a realistic and educational feel for understanding product information that is only available online.
Social networks have become the largest branding and marketing areas of this era. Sites such as Twitter, Facebook, Instagram, Pinterest, and many others have risen in this past decade and have continued to gain customers on the strength of their usability and features. These sites have grown in popularity in the last few years, typically evolving from basic technologies as participation increases and user expectations shape and form the media (Fitzgerald, 2008). Increased use of these sites also reflects a growing population of users who are becoming more comfortable with social media.
Online social networking sites have become integrated into the routine of modern-day social interactions and are widely used as a primary source of information for most. Research has found that Facebook is deeply integrated into users' daily lives through specific routines and rituals (Debatin, Lovejoy, Horn, & Hughes, 2009). Facebook is a social networking tool used in various instances to help people connect to people or businesses connect to people. It is with changes in security that people and businesses will need the most help. These areas are vital to the adaptations of today's society: change is needed, and with this change, new adaptations for online security are required, and in some instances mandated.
Online security can be viewed from a virtual standpoint in the relationship between consumers and businesses. Many businesses use social media and online social networks to communicate with one another, while many users rely on the same technology to find new information. Online information security risks, such as identity theft, have increasingly become a major factor inhibiting the potential growth of e-commerce (Wang, 2010). A baseline system of online security is needed to help fulfill business expectations and to promote or generate business in different geographic locations.
A Geographic Location
Hochman et al. (2012) define Instagram as a recent fad in mobile photo-sharing applications that provides a way to snap photos, tweak their images, and share them on various social networks with friends, family, and complete strangers. This type of social media helps create a realistic feel, letting people see photos of the specific areas where others are located. It also helps create a uniform timeline of photos that tells the story of one's life. Security is a high need in this type of online social media, which sees both personal and business use.
Pinterest is another online social media tool with many users who also use other networking tools. Pinterest allows members to "pin" items or images found on the internet to a pin board, which can then be easily shared through an email link or by following the creator (Dudenhoffer, 2012). This networking tool can be paired with other social media tools such as Instagram, Twitter, and Facebook, together creating a total profile immersion for people virtually. Security within such profiles is currently weak; changes and adaptations could strengthen areas of high influence such as these networks.
The popularity of these sites provides an opportunity to study the characteristics of online social network graphs at large scale (Mislove, Marcon, Gummadi, Druschel, & Bhattacharjee, 2007). A leading cause of the rise of these sites has been consumers finding them easy to use and navigate to find information. The usability of these sites makes it very easy for customers of all ages to navigate through processes that require personal information. When people give information to the virtual world, they also make themselves vulnerable to virtual threats.
No matter how easy an internet site makes it to submit sensitive information, no site is purely safe and danger-free. This is why internet connectivity is a matter that needs to be handled with high importance. The need to have security at its maximum has never been greater than it is today. From every angle, people are becoming vulnerable to attacks from predators capable of obtaining information. As vital information spreads through technology-based environments, it can also spread throughout the world.
WHO IS CONNECTED AND WHY?
The main focus for the impact of the digital age is the critical mass of people in the world. People are beginning to use, read, analyze, and interact virtually at younger ages and more often than ever before. This change occurs because people now begin to interact with social networks at a younger age. Learning starts to develop earlier because many cognitive abilities are developed and acquired when children are young, giving them the ability to develop an interest in fields they may want to work in as adults.
Businesses
Businesses look at the advertising side of their work in relation to how it can assist them with sales and goals. Since marketing is such a large portion of business, brands especially need to identify which advertisements can help them reach their goals. The future of advertising and marketing is based on today's consumer and where they look for information about their purchases. The future of internet connectivity and adoption is directly linked to the suitability of the internet for these uses.
An important factor in firm development is the dire need for firms to develop their niche successfully. The virtual sector of business is extremely dependent on use by the user and the internet. Businesses tend to adopt a series of modules formulated around their overall mission as a company. Leaning toward technology-based marketing is one way to look at the overall spectrum of business.
Companies are moving online media to the core of their programs because of how often consumers use social media for information-gathering purposes (Grainger, 2010). Online features give customers a more realistic feel, allowing them to obtain information quickly. Such features are used in many marketing and advertising tactics, because many customers are prone to search online for items that match what they are looking for. The connection must also be secure and safe for consumers, since viable personal information is used for online purchases.
Internet connectivity can also create opportunities for businesses whose niche is merchandising to customers who are not on location. This avenue also requires an increase in the data transmitted through various servers and websites. Servers are the basis of information handling; they provide a space and location through which all information in a network is transmitted. Businesses are taking part in this big shift because moving information in this tech-savvy environment is easier than ever. Servers also offer a sense of protection, allowing business material to be saved and updated. Since safety is a big issue, even due to weather, it is feasible for businesses to purchase servers. Moreover, prices have decreased relative to the safety a server provides. Price reflects not only the quality and value of the item at hand but also the overall performance of the machine or equipment.
Institutions are also main contributors to, and purchasers of, servers and internet connections, because they are using online connections more than ever. A main factor that must be adjusted for institutions is the feasibility of what is at hand for the item to be installed, adjusted, and used within the institution. This feasibility also dictates a level of privacy that must meet a specific standard for the institution to use. If these privacy areas are not up to that level, the institution will not be able to use these items in line with its mission or vision.
Schools
The future of institutions is founded on the internet and the connectivity these institutions have with it. More schools are using web design formats that are very user friendly, so that more information can be placed online. With more information placed online, more students will have virtual access to it. This also gives the institution the ability to place the majority of its application processes, faculty-related work, and communication online, accessible by any faculty member or student at any time.
The vulnerability of this information to access by outside threats depends on how secure the information is. Many institutions place restrictions limiting where information can be accessed, which limits users' capability to access it. Limiting this information can lead to problems internally as well as externally, because not all users will agree with and comply with the policy regulations.
Institutions can create a wall that blocks information sharing through a semi-permeable layer accessed by users and administrators. This barrier helps control the amount of information that can be shared and displayed in the face of virtual threats. Creating this wall gives administrators more control over the information that is shared and provides a safe avenue for users at the lower level. When a barrier is in place, it can also enforce rules that help keep threats at bay so they never reach core areas of information. The vital elements of this are displayed in Figure 4.
Figure 4. 
Private cloud enterprise data center (Social-Cast VMware, 2012)
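The semi-permeable layer described above amounts to tiered access control: a resource is visible only to roles whose clearance meets its level. A hypothetical sketch of that rule (the role names and tiers are illustrative, not drawn from any particular institution's system):

```python
# Hypothetical tiers for the "semi-permeable" institutional wall.
RESOURCE_LEVELS = {"public": 0, "internal": 1, "restricted": 2}
ROLE_CLEARANCE = {"student": 0, "faculty": 1, "administrator": 2}

def may_view(role, resource_level):
    """Allow access when the role's clearance is at least the resource level."""
    return ROLE_CLEARANCE[role] >= RESOURCE_LEVELS[resource_level]
```

Under this rule an administrator sees everything, while a student is confined to public resources, which is precisely the kind of barrier that keeps outside probing away from core information.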
WHERE DO PEOPLE CONNECT?
A rise in the digital social media arena has a direct impact on the world, and many companies are beginning to respond with technological changes. With increasing technological advances, businesses can operate more smoothly, effectively, and efficiently to better facilitate operations and management. More tools are available for businesses that want to take their business and marketing virtual. This has led to an increase in mobile device use, since most users of social media use these applications in various locations.
Marketing has become a direct and distinct changing factor in business competition. More businesses have begun to change the style and location of their advertisements. Clearly, more customers are changing their overall plans based on how they can obtain information on a general basis. It follows that many businesses are also creating new avenues and paths for marketing advertisements to reach customers at various distances. This is also why it is very important for a business to be able to connect over a large geographic area with ease.
Social networks via internet connectivity are among the best ways for businesses to connect to people. Businesses of all statures are looking at marketing strategies for reaching customers across geographic locations. A large trend in mass adoption is for businesses to connect to customers via mobile devices, which in turn lets customers connect to businesses at various levels. This trend increases the use of mobile devices, because a majority of users are using these devices with a purpose.
One consequence of the heavy use of internet connectivity and mobile devices is the ability of outside sources to obtain information via mobile. This path will change the overall landscape of how customers purchase goods online and where they can go to secure their purchases. It is then up to mobile device providers to create secure internet services for customers, depending on how the customer is able to work with the technology. Even as the security of the internet service increases for the customer, it must also increase at the business end, to ensure that all employees and persons involved in online transactions are monitored by a service that can provide safety throughout the purchase and delivery of the product.
INTERNET STALKING
The growth of the social networking trend depends in part on the security features available to every user.
Internet stalking can be characterized as a threat from an outside source that inflicts, or threatens to inflict, harm on a piece of information or a person. These threats can be international or national, depending on where the organization or user is geographically located. With internet stalking noted more often in today's society, it is also presumed that people are becoming more vulnerable to attacks stemming from internet insecurity. An insecure internet connection can be assessed based on what the user currently employs for connectivity, but it should always be regarded as a threat to any customer.
When international threats are aimed at consumers, they can be perceived as threats directed at the nation, because they originate from outside the country. These circumstances can be legal or illegal depending on the source of the threat. Many users identify such threats as acts of terror, because they do not know much about the types of threats that are visible.
An example comes from a post on social media that included valuable and private information. Any post can become a threat vector for outside sources, whether a tweet from Twitter, a picture from Instagram, or a post on Facebook. Twitter is less than three years old, commands more than 41 million users as of July 2009, and is growing fast (Kwak, 2010).
Figure 5. 
Post on Facebook, a social media application (Mustafa, 2012)
From this point, an invader or internet stalker can take the vital information into account and begin to look up where the user lives or where the user is updating his or her status. This can be done by searching the internet for items displayed virtually that can indicate where the user lives. In this example, the user Mustaza Mustafa is posting this status with Hongkait. Location information for users is stored in the About section of a user's Facebook profile.
Google Earth is an application that helps look up locations and geographic areas on maps to determine where items, businesses, and people are located. With this application, internet stalking can be made easier by inputting a location for a specific item, person, place, or business. Since this application has very little security, it can be used with most location-based information under the right circumstances.
The phenomenon of cyber-stalking and virtual harassment will be a central focus for the next generation. It is with this type of harassment that schools and institutions become the most vulnerable, given the populations of these locations. Areas for improvement lie in creating secure environments for student and faculty online communication. These areas will remain avenues for major threats as long as they are insecure enough for cyber stalkers to pass through and obtain information.
Schools
Institutions, schools, colleges, and universities are prime targets for internet stalking because of the number of users who use the internet to connect, obtain information, and communicate. With increasing interest in the social trend, schools and institutions are adapting by moving more programs online. This adaptation can help increase the student population of the school as well as the number of adult learners who use the internet to obtain information. This increase can also hurt the population by drawing stalkers and other predators into internet stalking. As schools and universities host the largest concentrations of teens and students, they also become the most vulnerable.
Finn (2004) conducted an exploratory study of 339 students at the University of New Hampshire, in which about 10% to 15% of students reported repeated communication that threatened, insulted, or harassed them. This type of negative communication can result in various types of lawsuits, endangerment, or even physical harassment, which can lead to negative effects on institutions and their reputations. Internal and even external customers can be the main sources of threats to the institution, depending on what information is currently stored and what information is being obtained.
University communication and connectivity systems need to be impeccable in order to ensure secure networks for students and faculty. To remain financially affordable, these systems need departments capable of tracking where data are going, where they come from, and how users are able to obtain information. This type of security is necessary now and will remain necessary as student numbers and the use of online platforms increase. It also supports the university's reputation by promoting a positive, secure environment.
Internet stalking also increases an institution's vulnerability to international attacks from outside sources. These attacks, or acts of terror, can be prevented or at least lessened by a secure internet connection. Such a connection must be set up with specific requirements so that all users retain access to information and communication. Internet connectivity is increasingly moving off the desktop and into the mobile and wireless environment, particularly for specific demographic groups (Lenhart, Purcell, Smith, & Zickuhr, 2010). As internet connections become a main point of importance for institutions, security in these areas will also increase with the number of users.
Leading to Intelligence Gathering
The various types of information that social media customers input via the internet can be viewed and retrieved by outside sources. The information gathered can fuel negative activity by international actors. In various instances, personal and financial information can be gathered and used against users to threaten them or steal their identities.
Intelligence Gathering from Other Countries via Internet Connectivity
With social networking spread across the internet, social media are available in every country, which increases the use of internet connectivity. This availability of information connects businesses and customers in terms of how information is shared. Intelligence gathering is one way of putting the available information to use, depending on who receives it. Businesses can use this approach to target local customers who identify with the values the company brings to the table. It is also valuable for online social marketing, because advertising online is more feasible for businesses than advertising physically.
From an international point of view, intelligence gathering works because internet connectivity brings users from various locations together in one new setting. This virtual environment becomes a normal atmosphere for many users, since many are no longer satisfied with the physical aspects of businesses. Using intelligence gathered from other countries helps institutions and businesses build lists of potential customers from varying backgrounds, which can help improve existing business performance. Such a modification changes how information is displayed and delivered to customers, and it should increase sales so that the business sees a clear return on investment.
Institutions can use intelligence gathering to create new avenues for students to prosper. Distance learning and online collaborative learning benefit most, as these are the areas most affected by online networking. These changes also strengthen the connection between the institution and the student learner, who feels connected and secure. These are the most important qualities in any aspect of online networking, whether in business or education.
Privacy Laws
The U.S., Canada, and the European Union (EU) provide a useful launching pad for examining cross-border privacy issues. The U.S. has maintained a very high cost for securing its internet connections. This is a main reason why many institutions and businesses have created rules governing what may be transmitted over the internet. These rules also shape how businesses prepare media and advertisements, and the security of those messages.
With Europe's high trade volume and many online businesses, there is a strong need for privacy measures that make customers feel safe. The EU relies on this need to shape its online security. Many businesses are so accustomed to this development process that it has become the normal practice for online marketing. Even the applications used over the internet avoid situations where privacy can be breached. Processes of this type help protect businesses from outside attacks.
FUTURE OF INTERNET CONNECTIVITY: SOCIAL NETWORKS
As internet connectivity becomes the most favorable and usable feature in business, many businesses, customers, and people in general will look for more ways to use this type of connection. The basis of a secure internet connection service begins with several items that dictate how people use the connectivity, what they use it for, and where they use it. Many businesses will also become more comfortable with internet usage in terms of security, mobility, and marketing. Overall, social networking is key to development for businesses and key to connections for people.
EMERGING TECHNOLOGIES AND THE INTERNET
Google Glass is a wearable computer and a variant of the head mounted display (HMD). What is interesting about this innovation is that it is more than a headset. Google has connected it to the internet in many ways, not least through the user's Google+ account, which enables the user to share photos and videos with others. Through Google+ the user is connected to all the contacts in their Gmail account. Glass provides a way for the user to interact with the internet in different ways, through the rich media environment that Google supports. Google Glass could be integrated into internet security in many of the same ways as the traditional mixed reality systems described in this chapter.
Google Glass may not yet have its uses defined, but many have made prognostications about applications for the augmented reality system. It is a visionary product with associated services, and many have recently written about its potential uses. Some of the best ideas are very close to existing fields of virtual and augmented reality. The fact that the headset is so innovative, small, and connected is intriguing and opens the door to many new applications, as well as to revisiting old applications with new technologies. Many envision that Google Glass will be used in the operating room to provide real-time information to surgeons, as well as augmenting education on many different levels.
Another emerging technology that is changing things as we speak is content centric networking. Xerox PARC is currently developing Content Centric Networking (CCN) and making the software open source. One advantage of this technology is that the data maintain their integrity no matter where they are transmitted, because security keys are incorporated in the peer-to-peer demonstration of CCN, which can ride on top of existing protocols or run natively. Such technology is essential for mixed reality environments, which need to share information locally and quickly.
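The integrity property just described can be sketched in a few lines: content is named by its digest and signed, so a consumer can verify a packet no matter which peer served it. This is a minimal illustration of the idea, not PARC's actual CCN implementation; the shared key and packet format are invented for the example.

```python
import hashlib
import hmac

SECRET_KEY = b"shared-demo-key"  # hypothetical pre-shared key for this sketch

def publish(content: bytes):
    """Name the content by its digest and sign it, CCN-style."""
    name = hashlib.sha256(content).hexdigest()
    signature = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return {"name": name, "data": content, "sig": signature}

def verify(packet) -> bool:
    """A consumer checks integrity regardless of which peer served the packet."""
    name_ok = hashlib.sha256(packet["data"]).hexdigest() == packet["name"]
    sig_ok = hmac.compare_digest(
        hmac.new(SECRET_KEY, packet["data"], hashlib.sha256).hexdigest(),
        packet["sig"],
    )
    return name_ok and sig_ok

packet = publish(b"sensor reading: 42")
assert verify(packet)       # intact content verifies
packet["data"] = b"tampered"
assert not verify(packet)   # modified content fails, wherever it came from
```

Because the name and signature travel with the data, the check works whether the packet arrived from the original publisher or a nearby cache, which is what makes the approach attractive for local, fast sharing.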
An interesting way we conceived to view ubiquitous wireless technologies and mixed reality technologies is to place a specific technology, or group of technologies, in a feedback control loop. Using such a model, we can construct the following control loop.
At the beginning of the loop is the need for knowledge and learning, which may be an individual's need and/or a formalized educational program. The next step in the feedback loop is the comparer; in this model it represents grades, 21st-century technology skills, self-fulfillment, and self-efficacy. To the right of the comparer is the reducer, which consists of pedagogy and technologies that are ubiquitous, wireless, and quite possibly on the mixed reality continuum. Last is learning and knowledge, which again can present informally or formally, as an individual's self-fulfillment, self-efficacy, educational achievement, or mastery of a topic. The control loop moves left to right, while the feedback itself runs from learning and knowledge back to the need for knowledge and learning. A chart has been created to help visualize the model.
Figure 6. 
Ubiquitous learning technology control loop
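The loop can also be sketched as a simple iterative model. The skill scale, gap measure, and learning rate below are illustrative assumptions layered onto the ULTCL description, not a formal part of the model.

```python
# A minimal numeric sketch of the Ubiquitous Learning Technology Control Loop
# (ULTCL). Each cycle compares desired and current competency, then the
# reducer (pedagogy plus ubiquitous technology) closes part of the gap.

def comparer(target_skill: float, current_skill: float) -> float:
    """Compare the desired competency (the need) with current competency."""
    return target_skill - current_skill

def reducer(gap: float, learning_rate: float = 0.5) -> float:
    """Pedagogy plus ubiquitous technology closes part of the gap each pass."""
    return gap * learning_rate

def ultcl(target: float, current: float, iterations: int = 5) -> float:
    """Run the loop: need -> comparer -> reducer -> learning -> feedback."""
    for _ in range(iterations):
        gap = comparer(target, current)   # how far the learner is from the goal
        current += reducer(gap)           # learning gained this cycle feeds back
    return current

print(ultcl(target=1.0, current=0.0))  # skill approaches the target over cycles
```

The point of the sketch is only that the feedback direction matters: learning feeds back into the need, so competency converges toward the target rather than being set in one step.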
Using this model we make some assumptions. The first is that learning is an activity individuals want to engage in. The second, illustrated by the comparer, is that they have a need. The reducer, once again, represents the technologies examined in this chapter: mobile devices and the types of mixed reality used for knowledge creation and collaboration. We therefore treat these technologies as the reducer in the Ubiquitous Learning Technology Control Loop (ULTCL). While this framework is neither predictive nor prescriptive, it does segment the topical area, which makes it useful for analyzing new and emerging technologies found in mixed reality environments.
Dangers
Cyber-attacks happen to all types of organizations and individuals. They can start in many different places, including any device that is connected to the Internet. This becomes highly problematic in our modern society, where devices such as copy machines are hooked up to the Internet to update themselves, report usage, install software, and so on. Having all these devices connected to the Internet increases our exposure and vulnerability, and with so many targets we need an orderly way to look for threats. As the threats have increased through the years, we have become more vulnerable to them. An interesting point about intrusion detection systems is that they are part hardware and part software. Therefore, when we implement one of these solutions, we need to keep current with hardware and software maintenance so that we receive the updates that keep the organization safe.
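The idea of an orderly way to look for threats can be illustrated with a toy detector that combines a signature check (known-bad ports) with a simple anomaly threshold (too many distinct ports probed from one source). The log format, port list, and threshold are invented for this sketch; a production deployment would use a maintained IDS such as Snort with current rules.

```python
# A toy detector combining a signature check with an anomaly threshold.
SUSPICIOUS_PORTS = {23, 2323, 4444}   # assumed "known bad" destination ports
SCAN_THRESHOLD = 3                    # distinct ports before we suspect a scan

def detect(events):
    """events: iterable of (src_ip, dst_port) pairs. Returns alert strings."""
    alerts = []
    ports_seen = {}
    for src, port in events:
        if port in SUSPICIOUS_PORTS:
            alerts.append(f"signature: {src} touched suspicious port {port}")
        ports_seen.setdefault(src, set()).add(port)
    for src, ports in ports_seen.items():
        if len(ports) >= SCAN_THRESHOLD:
            alerts.append(f"anomaly: {src} probed {len(ports)} ports (possible scan)")
    return alerts

events = [("10.0.0.5", 80), ("10.0.0.5", 22), ("10.0.0.5", 443),
          ("10.0.0.9", 4444)]
for alert in detect(events):
    print(alert)
```

Even this sketch shows why the hardware/software maintenance point matters: the signature set and thresholds are only as good as their last update.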
Many research papers and projects have demonstrated the usefulness of virtual and mixed reality environments in different fields. It is important that cyber warriors believe they are in a different environment; believability has been a requirement for successful implementations of mixed reality and virtual reality. Human computer interaction (HCI) is essential to making cyber warriors feel immersed in cyberspace. Since cyberspace cannot be seen by the naked eye, we need to gather the necessary data and information and present it to the user in a visual and productive environment. The potential of ubiquitous, mobile, and mixed reality technologies to deter Internet threats is enhanced by these characteristics, as we now have the ability to have individuals in geographically separate areas work together as one to solve new threats and problems. Mixed reality may be able to bridge the gap in recognizing security threats.
Incorporating mixed reality should only require changing the inputs to the user or cyber warrior from game data to actual data and information, along with integrating a head mounted display (HMD) and quite possibly new input devices, including brain-to-game interfacing. The process of creating a visual environment in which users can be active participants working with real data to solve problems and deter threats opens the process up to gamification. This permits the analysis of threats and also allows the threat log and data to be used for training, including threats executed in a game-based scenario.
Device Innovation
The devices we use to connect to the internet are becoming smaller and more powerful. Contemporary mobile devices are extremely capable: students can gather information off the Internet, download files, take pictures, email, alter portable document format (.pdf) files of any document they have downloaded, and analyze, synthesize, and type up documents, all without any intervention or training from the university. They can also participate in online discussions, call, email, text, and video chat on certain phones and devices, including the Apple iPhone and iPod touch. These platforms illustrate what is possible from a technological standpoint and the critical mass of the technology, and they show how devices have been adopted by organizations and, more importantly, by individuals, since many of the changes we have seen have been driven by individual adoption without any intervention by the organization or university. This is what makes technological forces so powerful; combined with other forces, they have helped change our society, no matter where we live. A ubiquitous device is defined as one that is always connected and allows access to content anytime and anywhere (Hummel & Hlavacs, 2003). Internet bandwidth has become fast and, more importantly, wireless and ubiquitous, which has enabled the growth of many types of mobile wireless devices. Figures 7 and 8 display the cyber warrior environments and associated processes.
Figure 7. 
Cyber warrior technology infrastructure
Figure 8. 
Cyber warrior scanning and interaction processes
EMERGING AREAS IN HUMAN COMPUTER INTERACTION FOR COUNTERING CYBER ATTACKS
One of these areas is the use of head mounted displays (HMDs), which may use spatial immersive display technology. With these devices we can create environments that reduce some of the complexity involved in detecting cyber threats. Mixed reality used in this capacity has appeared in popular fiction: Ender's Game (1985), a science fiction novel by Orson Scott Card, depicts the most talented young people being trained using virtual reality and augmented reality games. The US military has been using virtual reality for training and development; specific examples include soldiers shooting and field training with armor, infantry, aviation, artillery, and air defense. One of the first modern implementations was at the Defense Advanced Research Projects Agency (DARPA) Simnet facility at Ft. Knox, KY in the 1990s. Inside the facility were multiple types of units with representations of the vehicles and tools they would use on the battlefield. One of the authors participated in this event; the computers made a compelling and immersive environment for units to train together and against one another. It is not hard to imagine extending this type of technology to create an immersive environment that makes cyber threats easier to identify within the mass of data and information that may or may not be detected using more conventional means.
Mixed reality will make it easier to find these threats by reducing their complexity. Reducing complexity and increasing the understandability of threats will make it easier to work in an environment in which portions can be turned on and off. Current virtual reality and gaming technologies allow for the generation of the monitoring and training environments described. One of the first elements in a project to protect against and monitor cyber-attacks would be creating a 3-D world in which systems and cyberspace can be modeled. Since many cyber-attacks target specific cities, towns, and businesses, we can use geography as a starting point. From the generalized location we can define a highly granular area of vulnerability and concern, using existing geographical databases such as Google's. After this step we can focus on the mechanics of the 3-D world; many tools are available for this purpose, including DarkBASIC, a game engine that can handle the navigation and parameters of the 3-D world. Therefore, most if not all of the hardware, software, and technology needed to create and implement such a system already exist.
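The first step in that project, starting from generalized geography and narrowing to a defined area of vulnerability, might look like the following sketch. The city coordinates and attack records are fabricated stand-ins for a real geographic database such as Google's.

```python
# Group attack records by city and attach map coordinates, as a seed for the
# 3-D world described above. All data here are invented for illustration.
CITY_COORDS = {                      # hypothetical stand-in for a geo database
    "springfield": (39.80, -89.65),
    "riverton": (41.02, -91.10),
}

def localize_attacks(attack_log):
    """Group attack records by city and attach map coordinates."""
    areas = {}
    for city, target in attack_log:
        entry = areas.setdefault(city, {"coords": CITY_COORDS.get(city),
                                        "targets": []})
        entry["targets"].append(target)
    return areas

log = [("springfield", "bank-web-portal"),
       ("springfield", "city-scada"),
       ("riverton", "university-lms")]
areas = localize_attacks(log)
print(len(areas["springfield"]["targets"]))  # -> 2
```

Each grouped area would then become a region of the 3-D world whose detail level reflects how concentrated the attacks are.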
Currently there are many open source and commercial versions of software that permit the player/user to work against an AI or human opponent to develop their ethical hacking skills. Users who immerse themselves in this type of technology are developing their skill sets, and applying virtual and mixed reality to these systems will only enhance understanding and help users prepare for work as cyber warriors. Google Code currently hosts the project emu-os, a simulation (“Emu-os - EmuOS is an open-source hacking game and simulator. - Google Project Hosting,” 2012) that pits hacker against hacker; by playing, the user gains real-world, real-time experience.
The latest IDC predictions at the time of writing show that mobile devices are surpassing PCs as the way users connect to the internet. Software as a Service (SaaS) and Platform as a Service (PaaS) are both reporting exponential growth, which helps confirm the mobile computing trends we are seeing. Many reports describe the most popular mobile devices, and a common theme among them is smartphones and tablets (“IDC Predicts 2013”, 2012). We can therefore see that internet infrastructures are changing to meet the needs of a more mobile-device-oriented market. Mobile device security is going to become even more important as more people use these devices for all sorts of tasks, including those oriented around virtual and mixed reality.
Enhanced Collaboration and Learning with New Technologies
Collaboration is important and is enhanced by virtual and mixed reality systems. Collaboration is an important part of the new learning paradigm, and e-learning tools that support it are readily available on most mobile devices. Collaboration is enhanced by the use of mobile technologies and is a key intention of the knowledge age. Turoff (2000) proposed that collaboration provides a solution to learning outside of the physical classroom. In addition to collaboration, facilitation and updated educational methodologies are key components of e-learning and m-learning (Hiltz, Benbunan-Fich, Coppola, Rotter, & Turoff, 2000). Collaborative learning is the promotion of learning through social interaction; it is one of the five properties identified by Klopfer et al. (2004) that support established forms of learning through the use of mobile technologies (Naismith, Lonsdale, Vavoula, & Sharples, 2004). Figure 9 shows the Post University Cyber Lab with network forensics software running during a live class demonstration.
Figure 9. 
Post University cyber lab
Systems of Systems Concepts
When discussing hyperconnectivity it is necessary to discuss systems of systems concepts. A system of systems is a collection of systems tied together to create a more complex system (Popper, Bankes, Callaway, & DeLaurentis, 2004). Figure 10 below displays a few methods of connecting to the internet along with network traffic scenarios. When thinking about the possibilities of hyperconnectivity, the personal area network (PAN) is an excellent example, as it allows multiple technologies to be interconnected through software applications. Google Glass has the potential to combine global positioning system (GPS) data, social media, digital terrain overlays, and synchronization with other devices. This increases the complexity of the system as it becomes part of a larger system, which multiplies the number of potential vulnerabilities.
Figure 10. 
Systems of systems
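The multiplication of vulnerabilities described above can be made concrete with a small graph model: each device is a node, each connection an edge, and every ordered entry-to-target pair an attacker could traverse is a potential attack path. The device names and link layout below are hypothetical.

```python
# Counting attack paths in a toy systems-of-systems (PAN) graph.
from itertools import permutations

def attack_paths(devices, links):
    """Count ordered entry->target pairs reachable over the connected system."""
    adjacency = {d: set() for d in devices}
    for a, b in links:
        adjacency[a].add(b)
        adjacency[b].add(a)
    reachable = set()
    for entry, target in permutations(devices, 2):
        frontier, seen = [entry], {entry}   # simple graph search from entry
        while frontier:
            node = frontier.pop()
            if node == target:
                reachable.add((entry, target))
                break
            for nxt in adjacency[node] - seen:
                seen.add(nxt)
                frontier.append(nxt)
    return len(reachable)

devices = ["glass", "phone", "gps", "social-app"]
isolated = attack_paths(devices, [])                     # no links: no paths
pan = attack_paths(devices, [("glass", "phone"),
                             ("phone", "gps"),
                             ("phone", "social-app")])   # PAN hub topology
print(isolated, pan)  # -> 0 12
```

Four isolated devices expose zero cross-device paths, but linking them through one hub makes every ordered pair reachable, which is the sense in which joining a larger system multiplies potential vulnerabilities.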
CONCLUSION
The future of national and international security depends on multiple complex countermeasures to ensure a proper security posture throughout a system's lifecycle. To effectively protect these systems from exploitation, it is necessary to understand the current threats and how they exploit current vulnerabilities. Additionally, one must be able to gauge future threats and have a strong grasp of the laws that drive the need for security, such as enhanced privacy laws enacted by national governments. Examined within this chapter are the potential security threats associated with social media, mobile devices, virtual worlds, augmented reality, and mixed reality. Also reviewed were examples of complex attacks that could disrupt human-robot interaction, child-computer interaction, mobile computing, social networks, and more, through human-centered issues in security design. This book chapter serves as a guide for those who use multiple wired and wireless technologies but fail to realize the dangers of being hyperconnected.
REFERENCES
Armbrust, M., Fox, A., Griffith, R., Joseph, A. D., Katz, R., Konwinski, A., & Zaharia, M. (2010). A view of cloud computing. Communications of the ACM, 53(4), 50-58.
BackTrack Linux. (2011). BackTrack Linux. Retrieved March 22, 2013, from www.backtracklinux.org
Becher, M., Freiling, F., & Leider, B. (2007). On the effort to create smartphone worms in Windows Mobile. In Proceedings of the 2007 IEEE Workshop on Information Assurance. United States Military Academy. Retrieved March 22, 2013, from http://pil.informatik.uni-mannheim.de/filepool/publications/on-the-effort-to-create-smartphone-worms-in-windows-mobile.pdf
Beidleman, S. W. (2009). Defining and Deterring Cyber War. Carlisle Barracks, PA: Army War College. Retrieved March 10, 2013, from http://www.hsdl.org/?abstract&doc=118653&coll=limited
Bhattacharya, D. (2008). Leadership styles and information security in small businesses: An empirical investigation (Doctoral dissertation, University of Phoenix). Retrieved March 9, 2013, from www.phoenix.edu/apololibrary
Bishop, M., & Taylor, C. (2009). A Critical Analysis of the Centers of Academic Excellence Program. In Proceedings ofthe 13th Colloquium for Information Systems Security Education (pp. 1-3). Retrieved March 9, 2013, from http://nob.cs.ucdavis.edu/bishop/papers/2009-cisse/
Bose, A. (2008). Propagation, detection and containment of mobile malware (Doctoral dissertation, University of Michigan). Retrieved March 11, 2013, from www.phoenix.edu/apololibrary
Brown, B. (2009). Beyond Downadup: Security expert worries about smart phone, TinyURL threats: Malware writers just waiting for financial incentive to strike, F-Secure exec warns. Retrieved March 20, 2013, from http://business.highbeam.com/409220/article-1G1-214585913/beyond-downadup-security-expert-worries-smart-phone
Bullock J. Haddow G. Coppola D. Yeletaysi S. (2009). Introduction to homeland security: Principles of all-hazards response (3rd ed.). Burlington, MA: Elsevier Inc.
National Security Agency, Common Criteria Evaluation and Validation Scheme (CCEVS). (2008). Common criteria evaluation and validation scheme -- Organization, management, and concept of operations (Version 2.0). Retrieved from National Information Assurance Partnership website: http://www.niap-ccevs.org/policy/ccevs/scheme-pub-1.pdf
Celeda, P. (2011). Network security monitoring and behavior analysis. Retrieved March 22, 2013, from http://www.terena.org/activities/campus-bp/pdf/gn3-na3-t4-cbpd133.pdf
Cheok, A., Fernando, O., & Liu, W. (2008). The magical world of mixed reality. Innovation: The Magazine of Research and Technology, National University of Singapore and World Scientific Publishing, 8(1), 70–73.
Cheok, A. (2009). Mixed Reality Entertainment and Art. International Journal of Virtual Reality, 8(2), 83–90.
Cheok, A., Man Fung, H., Yustina, E., & Shang Ping, L. (2005). Mobile Computing With Personal Area Network and Human Power Generation. International Journal of Software Engineering and Knowledge Engineering, 15(2), 169–175. 10.1142/S0218194005002348
Clarke, R. & Knake, R. (2010). Cyber war: The next threat to national security and what to do about it. New York, NY: Ecco.
Conti, M., Hasani, A., & Crispo, B. (2011). Virtual Private Social Networks. In Proceedings of the First ACM Conference on Data and Application Security and Privacy. New York, NY: ACM.
Dawson, M. (2011). Applicability of Web 2.0: Training for Tactical Military Applications. Global TIME, 1, 395-398.
Dawson, M. E., Jr., Crespo, M., & Brewster, S. (2013). DoD cyber technology policies to secure automated information systems. International Journal of Business Continuity and Risk Management, 4(1), 1–22. 10.1504/IJBCRM.2013.053089
Dawson, M. E., & Al Saeed, T. (2012). Use of Open Source Software and Virtualization in Academia to Enhance Higher Education Everywhere. Cutting-edge Technologies in Higher Education, 6, 283–313. 10.1108/S2044-9968(2012)000006C013
Debatin B. Lovejoy J. P. Horn A. K. Hughes B. N. (2009). Facebook and Online Privacy: Attitudes, Behaviors, and Unintended Consequences. Journal of Computer-Mediated Communication, 15(1), 83–108. 10.1111/j.1083-6101.2009.01494.x
Denning, D. E. (2012). Stuxnet: What Has Changed? Future Internet, 4(3), 672–687. 10.3390/fi4030672
Dudenhoffer, C. (2012). Pin It! Pinterest as a Library Marketing Information Literacy Tool. College & Research Libraries News, 73(6), 328–332.
Dyck, J., Pinelle, D., Brown, B., & Gutwin, C. (2003). Learning from Games: HCI Design Innovations in Entertainment Software. In Proceedings of Graphics Interface (pp. 237-246). Retrieved March 18, 2013, from http://hci.usask.ca/publications/view.php?id=88
EPOC Features. (2012). Retrieved from http://www.emotiv.com/epoc/features.php
Finn, J. (2004). A Survey of Online Harassment at a University Campus. Journal of Interpersonal Violence, 19(4), 468–483.
Fitzgerald, D. C. (2008). Intersections of the Self: Identity in the Boom of Social Media (Doctoral Dissertation). Available from ProQuest Dissertations and Thesis Full Texts Database: http://search.proquest.com/docview/304607151
Fraser, M., Hindmarsh, J., Best, K., Heath, C., Biegel, G., Greenhalgh, C., & Reeves, S. (2006). Remote Collaboration Over Video Data: Towards Real-Time e-Social Science. Computer Supported Cooperative Work, 15(4), 257–279. 10.1007/s10606-006-9027-y
Google Project Hosting. (2012). Emu-os - EmuOS Is an Open-source Hacking Game and Simulator. Retrieved March 11, 2013, from http://code.google.com/p/emu-os/
Grainger, J. (2010). Social Media and the Fortune 500: How the Fortune 500 Uses, Perceives and Measures Social Media as a Marketing Tool (Doctoral Dissertation). Available from ProQuest Dissertations and Thesis Full Texts Database: https://cdr.lib.unc.edu/indexablecontent?id=uuid:ae530f99-9b8d-43a4-9fa4-9f12c5b00a21&ds=DATA_FILE
Hiltz, S. R., Benbunan-Fich, R., Coppola, N., Rotter, N., & Turoff, M. (2000). Measuring the Importance of Collaborative Learning for the Effectiveness of ALN: A Multi-Measure, Multi-Method Approach. The Journal of Asynchronous Learning, 4(2), 103–125.
Hochman, N., & Schwartz, R. (2012). Visualizing Instagram: Tracing Cultural Visual Rhythms. In Proceedings of the Sixth International AAAI Conference on Weblogs and Social Media. Association for the Advancement of Artificial Intelligence. Retrieved March 18, 2013, from http://www.aaai.org/ocs/index.php/ICWSM/ICWSM12/paper/viewFile/4782/5091
Hsu, J. (n.d.). U.S. considers open-source software for cyber security. Retrieved March 22, 2013, from http://www.technewsdaily.com/2644-cybersecurity-open-source.html
Hummel, K. A., & Hlavacs, H. (2003). Anytime, Anywhere Learning Behaviour Using a Web Based Platform for a University Lecture. In Proceedings of SSGRR 2003. L'Aquila, Italy.
Kwak, H., Lee, C., Park, H., & Moon, S. (2010). What is Twitter, a Social Network or News Media? In Proceedings of the 19th International Conference on World Wide Web. Academic Press.
Lenhart, A., Purcell, K., Smith, A., & Zickuhr, K. (2010). Social Media & Mobile Internet Use Among Teens and Young Adults. Pew Research Center. Retrieved March 20, 2013, from http://web.pewinternet.org/media/Files/Reports/2010/PIP_Social_Media_and_Young_Adults_Report_Final_with_toplines.pdf
Lewis, B. K. (2012). Social Media and Strategic Communications: Attitudes and Perceptions Among College Students (Doctoral Dissertation). Available from ProQuest Dissertations and Thesis Full Texts Database: http://www.prsa.org/Intelligence/PRJournal/Documents/2012LewisNichols.pdf
Lopez, C. (2009). Immersive technology melds Hollywood, warrior training. Soldiers, 64(5), 27.
Lotring, A. (2005). Training the millennial sailor. U.S. Naval Institute Proceedings, 131(12), 36–37.
Mac, R. (2013). No One Is More Excited For Google Glass Than Facebook CEO Mark Zuckerberg. Retrieved March 28, 2013, from http://www.forbes.com/sites/ryanmac/2013/02/21/no-one-is-more-excited-for-google-glass-than-facebook-ceo-mark-zuckerberg/
Maxwell, D., & McLennan, K. (2012). Case Study: Leveraging Government and Academic Partnerships in MOSES. In Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications, (pp. 1604-1616). Academic Press.
Mislove, A., Marcon, M., Gummadi, K. P., Druschel, P., & Bhattacharjee, B. (2007). Measurement and Analysis of Online Social Networks. In Proceedings of the 7th ACM SIGCOMM Conference on Internet Measurement (pp. 29-42). ACM.
Mulliner, C., & Miller, C. (2009). Injecting SMS messages into smartphones for security analysis. In Proceedings of the 3rd USENIX Workshop on Offensive Technologies. Retrieved March 22, 2013, from https://www.usenix.org/legacy/events/woot09/tech/full_papers/mulliner.pdf
Mustafa, M. (2012). How to Customize the 'Via' Status on Facebook Posts. Hongkiat.com Inspiring Technology. Retrieved April 18, 2013, from http://www.hongkiat.com/blog/customize-facebook-status/
Myers, S. (2012). Operative BackTrack. Journal of On Demand Hacking, 1(3), 60-66.
Naismith, L., Lonsdale, P., Vavoula, G., & Sharples, M. (2006). Literature review in mobile technologies and learning. Futurelab Series. Retrieved March 22, 2013, from http://www2.futurelab.org.uk/resources/documents/litreviews/MobileReview.pdf
Omar, M., & Dawson, M. (2013, April). Research in Progress- Defending Android Smartphones from Malware Attacks. In Proceedings of 2013 Third International Conference on Advanced Computing and Communication Technologies (pp. 288-292). Rohtak, India: IEEE.
Park, S. R., Nah, F. F., Dewester, D., & Eschenbrenner, B. (2008). Virtual World Affordances: Enhancing Brand Value. Journal of Virtual Worlds Research, 1(2), 1–18.
Parti, K. (2011). Actual Policing in Virtual Reality - A Cause of Moral Panic or a Justified Need? InTech. Retrieved March 22, 2013, from http://www.intechopen.com/books/virtual-reality/actual-policing-in-virtual-reality-a-cause-of-moral-panic-or-a-justified-need-
Perens, B. (1999). The open source definition. In Open sources: Voices from the open source revolution (pp. 171-185). Academic Press.
Popper, S., Bankes, S., Callaway, R., & DeLaurentis, D. (2004). System-of-Systems Symposium: Report on a Summer Conversation. Arlington, VA: Potomac Institute for Policy Studies.
Qualman E. (2013). Socialnomics: How Social Media Transforms the Way We Live and Do Business (2nd ed.). Hoboken, NJ: John Wiley & Sons.
Raento, M., Oulasvirta, A., & Eagle, N. (2009). Smartphones: An Emerging Tool for Social Scientists. Sociological Methods & Research, 37(3), 426–454. 10.1177/0049124108330005
Rajabhushanam, C. C., & Kathirvel, A. A. (2011). System of One to Three Umpire Security System for Wireless Mobile Ad hoc Network. Journal Of Computer Science, 7(12), 1854-1858.
Rash, W. (2004). Latest skulls Trojan foretells risky smartphone future. Retrieved from www.eweek.com
Reed, D. (2003). Applying the OSI seven layer network model to information security. Retrieved March 22, 2013, from http://www.isd.mel.nist.gov/projects/processcontrol/members/minutes/7-Sep-2004/OSI.pdf
Roesch, M. (1999). Snort: Lightweight Intrusion Detection for Networks. In Proceedings of LISA '99: 13th USENIX Conference on System Administration. Retrieved March 18, 2013, from https://www.usenix.org/legacy/events/lisa99/full_papers/roesch/roesch.pdf
Sadasivam, K., Samudrala, B., & Yang, A. (2005). Design of Network Security Projects Using Honeypots. Journal of Computing Sciences in Colleges, 20(4), 282–293.
Salah, K., & Kahtani, A. (2009). Improving Snort performance under Linux. IET Communications, 3(12), 1883–1895.
Sexton, S. (2011). What is the Perceived Impact of Social Media on Personal Relationships in Adolescence? (Doctoral dissertation). Available from ProQuest Dissertations and Theses Full Text database: http://gradworks.umi.com/15/03/1503092.html
Siegel, A., Denny, W., Poff, K. W., Larose, C., Hale, R., & Hintze, M. (2009). Survey on Privacy Law Developments in 2009: United States, Canada, and the European Union. The Business Lawyer, 65(1), 285–307.
Snort. (2012). What is Snort? Retrieved March 20, 2013, from www.snort.org
Socialcast. (2012). Managing and Control Your Private Network. Retrieved April 22, 2013, from http://www.socialcast.com/administration
Surman, G. (2002). Understanding Security using the OSI Model. Retrieved March 25, 2013, from http://www.sans.org/reading_room/whitepapers/protocols/understanding-security-osi-model_377
IDC. (2012). IDC Predicts 2013 Will Be Dominated by Mobile and Cloud Developments as the IT Industry Shifts Into Full-Blown Competition on the 3rd Platform. Retrieved March 22, 2013, from https://www.idc.com/getdoc.jsp?containerId=prUS23814112
Turoff, M. (2000). An End to Student Segregation: No more separation between distance learning and regular courses. On the Horizon, 8(1), 1–7. doi:10.1108/10748120010803294
Tuteja, A., & Shanker, R. (2012). Optimization of Snort for Extrusion and Intrusion Detection and Prevention. International Journal of Engineering Research and Applications, 2(3), 1768-1774.
Uitzil, L. (2012). Wireless security system implemented in a mobile robot. International Journal of Computer Science Issues, 9(4), 16.
Walker, J. J. (2012). Cyber Security Concerns for Emergency Management. In Emergency Management. InTech. Retrieved April 2013, from http://www.intechopen.com/books/emergency-management/cyber-security-concerns-for-emergency-management
Wang, P. A. (2010). The Effect of Knowledge of Online Security Risks on Consumer Decision Making in B2C e-Commerce (Doctoral dissertation). ProQuest LLC.
Weiser, M. (1991). The computer for the 21st century. Scientific American, 265(3), 94-104.
Wong, L. (2005). Potential Bluetooth vulnerabilities in smartphones. Retrieved March 18, 2013, from http://citeseerx.ist.psu.edu
Xie, L., Zhang, X., Chaugule, A., Jaeger, T., & Zhu, S. (2009). Designing system-level defenses against cellphone malware. Retrieved March 21, 2013, from www.cse.psu.edu