Complexity Science in Cyber Security

1. Introduction

Computers and the Internet have become indispensable for homes and organisations alike. Our dependence on them grows by the day, whether for household users, mission-critical space control, power-grid management, medical applications or corporate finance systems. In parallel, the challenge of ensuring continued and reliable delivery of these services is becoming a bigger concern for organisations. Cyber security is at the forefront of the threats organisations face, with a majority rating it higher than the threat of terrorism or a natural disaster.

In spite of all the focus Cyber security has had, it has been a challenging journey so far. Global spend on IT security is expected to hit $120 billion by 2017 [4], and it is one area where the IT budget for most companies either stayed flat or slightly increased even through the recent financial crisis [5]. But that has not substantially reduced the number of vulnerabilities in software or attacks by criminal groups.

The US Government has been preparing for a “Cyber Pearl Harbour” [18] style all-out attack that might paralyse essential services and even cause physical destruction of property and lives. It is expected to be orchestrated from the criminal underbelly of countries like China, Russia or North Korea.

The economic impact of Cyber crime is estimated at $100 billion annually in the United States alone [4].

There is a need to fundamentally rethink our approach to securing our IT systems. Our approach to security so far has been siloed, focusing on point solutions for specific threats: anti-virus, spam filters, intrusion detection and firewalls [6]. But we are at a stage where Cyber systems are much more than just tin-and-wire and software. They involve systemic issues with a social, economic and political component. The interconnectedness of systems, intertwined with a people element, makes IT systems un-isolable from the human element. Complex Cyber systems today almost have a life of their own; they are complex adaptive systems that we have tried to understand and tackle using traditional theories.

2. Complex Systems – an Introduction

Before getting into the motivations for treating a Cyber system as a Complex system, here is a brief description of what a Complex system is. Note that the term “system” could be any combination of people, process or technology that fulfils a certain purpose. The wrist watch you are wearing, the sub-oceanic reefs, or the economy of a country – all are examples of a “system”.

In very simple terms, a Complex system is any system in which the parts of the system and their interactions together represent a specific behaviour, such that an analysis of all its constituent parts cannot explain the behaviour. In such systems cause and effect cannot necessarily be related, and the relationships are non-linear – a small change could have a disproportionate impact. In other words, as Aristotle said, “the whole is greater than the sum of its parts”. One of the most popular examples used in this context is an urban traffic system and the emergence of traffic jams; analysis of individual cars and car drivers cannot explain the patterns and emergence of traffic jams.

A Complex Adaptive System (CAS) additionally has characteristics of self-learning, emergence and evolution among the participants of the complex system. The participants or agents in a CAS show heterogeneous behaviour, and their behaviour and interactions with other agents continuously evolve. The key characteristics for a system to be characterised as Complex Adaptive are:


  • The behaviour or output cannot be predicted simply by analysing the parts and inputs of the system.
  • The behaviour of the system is emergent and changes with time. The same input and environmental conditions do not always guarantee the same output.
  • The participants or agents of a system (human agents in this case) are self-learning and change their behaviour based on the outcome of previous experience.

Complex processes are often confused with “complicated” processes. A complex process is something that has an unpredictable output, however simple the steps might seem. A complicated process is something with lots of intricate steps and difficult-to-achieve pre-conditions, but with a predictable outcome. An often-used example: making tea is Complex (at least for me… I can never get a cup that tastes the same as the previous one), while building a car is Complicated. David Snowden’s Cynefin framework gives a more formal description of the terms [7].

Complexity as a field of study isn’t new; its roots can be traced back to the work on Metaphysics by Aristotle [8]. Complexity theory is largely inspired by biological systems and has been used in social science, epidemiology and natural science for some time now. It has been applied to the study of economic systems and free markets, and is gaining acceptance for financial risk analysis as well (refer to my paper on Complexity in financial risk analysis [19]). It has not been very popular in Cyber security so far, but there is growing acceptance of complexity thinking in applied sciences and computing.

3. Motivation for using Complexity in Cyber Security

IT systems today are all designed and built by us (the human community of IT workers in an organisation plus its suppliers), and collectively we have all the knowledge there is to have about these systems. Why then do we see new attacks on IT systems every day that we never expected, exploiting vulnerabilities that we never knew existed? One of the reasons is that any IT system is designed by thousands of individuals across the whole technology stack, from the business application down to the underlying network components and hardware it sits on. That introduces a strong human element in the design of Cyber systems, and opportunities become ubiquitous for the introduction of flaws that could become vulnerabilities [9].

Most organisations have multiple layers of defence for their critical systems (layers of firewalls, IDS, hardened O/S, strong authentication etc.), but attacks still happen. More often than not, computer break-ins are a collision of circumstances rather than a standalone vulnerability being exploited for a cyber-attack to succeed. In other words, it’s the “whole” of the circumstances and actions of the attackers that causes the damage.

3.1 Reductionism vs Holism approach

Reductionism and Holism are two contradictory philosophical approaches to the analysis and design of any object or system. The Reductionists argue that any system can be analysed by “reducing” it to its constituent elements, while the Holists argue that the whole is greater than the sum of its parts, so a system cannot be analysed merely by understanding its parts [10].

Reductionists argue that all systems and machines can be understood by looking at their constituent parts. Most of the modern sciences and analysis methods are based on the reductionist approach, and to be fair they have served us quite well so far. By understanding what each part does you really can analyse what a wrist watch will do; by designing each part separately you really can make a car behave the way you want it to; by analysing the positions of celestial objects we can accurately predict the next solar eclipse. Reductionism has a strong focus on causality – every effect has a cause.

But that is the extent to which the reductionist viewpoint can explain the behaviour of a system. When it comes to emergent systems like human behaviour, socio-economic systems, biological systems or socio-cyber systems, the reductionist approach has its limitations. Simple examples like the human body, the response of a mob to a political stimulus, the reaction of the financial market to the news of a merger, or even a traffic jam – these cannot be predicted even by studying the behaviour of the constituent members of these ‘systems’ in detail.

We have traditionally looked at Cyber security through a Reductionist lens, with specific point solutions for individual problems, and tried to anticipate the attacks a cyber-criminal might launch against known vulnerabilities. It’s time we started looking at Cyber security through an alternative, Holistic lens as well.

3.2 Computer Break-ins are like pathogen infections

Computer break-ins are more like viral or bacterial infections than a home or car break-in [9]. A burglar breaking into a house can’t really use it as a launch pad to break into the neighbours’; nor can the vulnerability in one car’s lock system be exploited against a million others across the globe simultaneously. Computer break-ins are more akin to microbial infections of the human body: they can propagate the infection as humans do; they are likely to impact large portions of the population of a species as long as its members are “connected” to each other; and in cases of severe infection, systems are generally ‘isolated’, just as people are put in ‘quarantine’ to reduce further spread [9]. Even the lexicon of Cyber systems uses biological metaphors – virus, worms, infections and so on. There are many parallels with epidemiology, but the design principles often employed in Cyber systems are not aligned to natural-selection principles. Cyber systems rely heavily on uniformity of processes and technology components, in contrast to the diversity of genes within a species that makes it more resilient to epidemic attacks [11].

The flu pandemic of 1918 killed around 50 million people, more than the Great War itself. Almost all of humanity was infected, but why did it impact 20–40 year-olds more than others? Perhaps a difference in body structure, causing a different reaction to the attack?

Complexity theory has gained great traction and proven quite useful in epidemiology, in understanding the patterns of spread of infections and ways of controlling them. Researchers are now turning towards applying lessons from the natural sciences to Cyber systems.

4. Approach to Mitigating security threats

Traditionally there have been two different and complementary approaches to mitigating security threats to Cyber systems, both in use today in most practical systems [11]:

4.1 Formal validation and testing

This approach primarily relies on the testing team of an IT system to discover any faults that could expose a vulnerability exploitable by attackers. This could be functional testing to validate that the system gives the correct answer as expected, penetration testing to validate its resilience to specific attacks, and availability/resilience testing. The scope of this testing is generally the system itself, not the frontline defences deployed around it.

This is a useful approach for fairly simple self-contained systems where the possible user journeys are straightforward. For most other interconnected systems, formal validation alone is not sufficient, as it’s never possible to ‘test it all’.

Test automation is a popular approach to reduce the human dependency of the validation processes, but as Turing’s Halting problem of Undecidability[*] proves, it’s impossible to build a machine that tests another one in all cases. Testing is only anecdotal evidence that the system works in the scenarios it has been tested for, and automation helps gather that anecdotal evidence quicker.

4.2 Encapsulation and boundaries of defence

For systems that cannot be fully validated through formal testing, we deploy additional layers of defence in the form of firewalls or network segregation, or encapsulate them into virtual machines with limited visibility of the rest of the network. Other common additional defence mechanisms are Intrusion Prevention Systems, anti-virus and the like.

This approach is ubiquitous in most organisations as a defence against unknown attacks, since it’s virtually impossible to formally ensure that a piece of software is free from any vulnerability and will remain so.

Approaches using Complexity sciences could prove a useful complement to these more traditional ways. The versatility of computer systems makes them unpredictable, or capable of emergent behaviour that cannot be predicted without “running it” [11]. Also, running it in isolation in a test environment is not the same as running a system in the real environment it is supposed to be in, as it’s the collision of multiple events that causes the apparent emergent behaviour (recalling holism!).

4.3 Diversity over Uniformity

Robustness to disturbances is a key emergent behaviour in biological systems. Imagine a species with all organisms having the exact same genetic structure, same body configuration, similar antibodies and immune system – the outbreak of a viral infection could wipe out the entire community. But that does not happen, because we are all formed differently and each of us has a different resistance to infections.

Similarly, some mission-critical Cyber systems, especially in the aerospace and medical industries, implement diverse implementations of the same functionality, with a centralised ‘voting’ function deciding the response to the requester if the results from the diverse implementations do not match.

It’s fairly common to have redundant copies of mission-critical systems in organisations, but they are homogeneous rather than diverse implementations – making them equally susceptible to all the faults and vulnerabilities of the primary ones. If the implementation of the redundant system is made different from the primary – a different O/S, a different application container or database version – the two variants would have different levels of resilience to certain attacks. Even a change in the sequence of memory stack access could vary the response to a buffer overflow attack on the variants [12] – alerting the central ‘voting’ system that there is something wrong somewhere. As long as the input data and the business function of the implementations are the same, any deviation in their responses is a sign of a potential attack. If a true service-based architecture is implemented, every ‘service’ could have multiple (but a small number of) heterogeneous implementations, and the overall business function could randomly select which implementation of a service it uses for every new user request. A fairly large number of different execution paths could be achieved this way, increasing the resilience of the system [13].
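The voting idea above can be sketched in a few lines. This is a minimal illustration, not a production design: `variant_a` and `variant_b` are hypothetical, independently written implementations of the same business rule (a 10% fee calculation), standing in for implementations running on diverse stacks.

```python
# Hypothetical heterogeneous implementations of the same business function.
# In a real system these would run on different O/S, containers or database
# versions; here they are plain functions for illustration only.
def variant_a(amount: float) -> float:
    """Reference implementation of a 10% fee calculation."""
    return round(amount * 0.10, 2)

def variant_b(amount: float) -> float:
    """Independently written implementation of the same rule."""
    return round(amount - amount * 0.90, 2)

def voter(request, variants):
    """Run every variant on the same input and compare results.

    If all variants agree, return the agreed answer; if any variant
    deviates, flag a potential attack (or latent fault) instead of
    silently returning a value.
    """
    results = [v(request) for v in variants]
    if len(set(results)) == 1:
        return {"ok": True, "result": results[0]}
    return {"ok": False, "results": results}  # deviation => raise an alert

print(voter(125.0, [variant_a, variant_b]))  # both agree on 12.5
```

The key design point is that the voter never needs to know which variant is “right”; mere disagreement between implementations that should be functionally identical is the signal.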

Multi-Variant Execution Environments (MVEEs) have been developed, in which applications with slight differences in implementation are executed in lockstep and their responses to each request are monitored [12]. These have proven quite useful in detecting intrusions that try to change the behaviour of the code, and even in identifying existing flaws where the variants respond differently to a request.

Along similar lines, using the N-version programming concept [14], an N-version anti-virus was developed at the University of Michigan that used heterogeneous implementations to check new files for corresponding virus signatures. The result was a more resilient anti-virus system, less prone to attacks on itself, with 35% better detection coverage across the estate [15].

4.4 Agent Based Modelling (ABM)

One of the key areas of study in Complexity science is Agent Based Modelling (ABM), a simulation technique used to understand and analyse the behaviour of Complex systems, specifically Complex Adaptive systems. The individuals or groups interacting with each other in the Complex system are represented by artificial ‘agents’ that act according to a predefined set of rules. The agents can evolve their behaviour and adapt to circumstances. Contrary to Deductive reasoning[†], which has most popularly been used to explain the behaviour of social and economic systems, simulation does not try to generalise the system and agents’ behaviour.

ABMs have been used widely to study phenomena such as crowd behaviour during a fire evacuation, the spread of epidemics, market behaviour and, more recently, financial risk analysis. ABM is a bottom-up modelling technique in which the behaviour of each agent is programmed separately and can differ from all other agents. The evolutionary and self-learning behaviour of agents can be implemented using various techniques, a Genetic Algorithm implementation being one of the most popular [16].

Cyber systems are interconnections between software modules, wiring of logical circuits, microchips, the Internet and a number of users (system users or end users). These interactions and actors can be implemented in a simulation model in order to do what-if analysis and predict the impact of changing parameters and interactions between the actors of the model. Simulation models have long been used to analyse performance characteristics based on application characteristics and user behaviour – some of the popular capacity and performance management tools use the technique. Similar techniques can be applied to analyse the response of Cyber systems to threats, to design a fault-tolerant architecture, and to analyse the extent of emergent robustness due to diversity of implementation.
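As a toy illustration of the idea, here is a minimal agent-based sketch of malware spreading across a network of hosts. Every parameter (number of hosts, link probability, per-host resilience, number of ticks) is an illustrative assumption, not calibrated data; a real model would encode actual topology and attacker rules.

```python
import random

random.seed(42)  # deterministic run for illustration

# Illustrative assumptions: 50 host-agents on a random network; each
# compromised host tries to infect every neighbour once per tick, and an
# attempt succeeds only if it beats the target host's resilience (a stand-in
# for diversity of patch level, O/S and configuration).
N_HOSTS, P_LINK, TICKS = 50, 0.1, 20

hosts = [{"infected": False, "resilience": random.uniform(0.5, 0.95)}
         for _ in range(N_HOSTS)]
links = [(i, j) for i in range(N_HOSTS) for j in range(i + 1, N_HOSTS)
         if random.random() < P_LINK]
hosts[0]["infected"] = True  # patient zero

for _ in range(TICKS):
    newly_infected = []
    for i, j in links:
        for src, dst in ((i, j), (j, i)):
            if hosts[src]["infected"] and not hosts[dst]["infected"]:
                if random.random() > hosts[dst]["resilience"]:
                    newly_infected.append(dst)
    for d in newly_infected:  # apply infections after the tick completes
        hosts[d]["infected"] = True

print(sum(h["infected"] for h in hosts), "of", N_HOSTS, "hosts compromised")
```

Re-running the model while varying a single parameter (say, raising the minimum resilience, which is what diversity of implementation buys you) is exactly the kind of what-if analysis described above.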

One of the key areas of focus in Agent Based Modelling is the “self-learning” process of agents. In the real world, the behaviour of an attacker evolves with experience. This aspect of an agent’s behaviour is implemented by a learning process, Genetic Algorithms being one of the most popular techniques for it. Genetic Algorithms have been used in automobile and aeronautics engineering design, in optimising the performance of Formula One cars [17], and in simulating investor learning behaviour in simulated stock markets (implemented using Agent Based models).

An interesting visualisation of a Genetic Algorithm – a self-learning process in action – is the demo of a simple 2D car design process that starts from scratch with a set of simple rules and ends up with a workable car from a blob of different parts.

The self-learning process of agents is based on “mutation” and “crossover” – two basic operators in a Genetic Algorithm implementation. They emulate the DNA crossovers and mutations in the biological evolution of life forms. Through crossovers and mutations, agents learn from their own experiences and mistakes. These could be used to simulate the learning behaviour of potential attackers, without the need to manually imagine all the use cases and user journeys an attacker might use to try to break a Cyber system.
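The two operators can be shown concretely. In this hedged sketch an attacker “strategy” is just a bit-string genome, and fitness counts how many bits match a hidden target, a deliberately artificial stand-in for “how close the probe gets to a working exploit”. Population size, rates and genome length are illustrative assumptions, not tuned values.

```python
import random

random.seed(0)  # deterministic run for illustration

GENOME_LEN, POP, GENERATIONS = 16, 20, 60
TARGET = [random.randint(0, 1) for _ in range(GENOME_LEN)]  # hidden goal

def fitness(genome):
    """Count bits matching the hidden target (higher is better)."""
    return sum(a == b for a, b in zip(genome, TARGET))

def crossover(parent1, parent2):
    """Single-point crossover: splice two parent genomes at a random cut."""
    cut = random.randrange(1, GENOME_LEN)
    return parent1[:cut] + parent2[cut:]

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - bit if random.random() < rate else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]  # selection: keep the fitter half
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best fitness:", fitness(best), "/", GENOME_LEN)
```

In an ABM of attackers, the genome would instead encode a sequence of probing actions, and fitness would be the simulated outcome of those actions against the modelled defences.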

5. Conclusion

Complexity in Cyber systems, and especially the use of Agent Based Modelling to assess the emergent behaviour of systems, is a relatively new field of study with very little research done on it yet. There is still some way to go before Agent Based Modelling becomes a commercial proposition for organisations. But given the focus on Cyber security and the inadequacies of our current stance, Complexity science is certainly an avenue on which practitioners and academia are increasing their focus.

Commercially available products and services using Complexity-based techniques will, however, take a while to enter mainstream commercial organisations.


[1] J. A. Lewis and S. Baker, “The Economic Impact of Cybercrime and Cyber Espionage,” 22 July 2013. [Online].

[2] L. Kugel, “Terrorism and the Global Economy,” E-International Relations Students, 31 Aug 2011. [Online].

[3] “Cybersecurity – Facts and Figures,” International Telecommunications Union. [Online].

[4] “Interesting Facts on Cybersecurity,” Florida Tech University Online. [Online].

[5] “Global security spending to hit $86B in 2016,” 14 Sep 2012. [Online].

[6] S. Forrest, S. Hofmeyr and B. Edwards, “The Complex Science of Cyber Defense,” 24 June 2013. [Online].

[7] “Cynefin Framework (David Snowden),” Wikipedia. [Online].

[8] “Metaphysics (Aristotle),” Wikipedia. [Online].

[9] R. Armstrong, “Motivation for the Study and Simulation of Cybersecurity as a Complex System,” 2008.

[10] S. A. McLeod, “Reductionism and Holism,” 2008.

[11] R. C. Armstrong, J. R. Mayo and F. Siebenlist, “Complexity Science Challenges in Cybersecurity,” March 2009.

[12] B. Salamat, T. Jackson, A. Gal and M. Franz, “Orchestra: Intrusion Detection Using Parallel Execution and Monitoring of Program Variants in User-Space,” Proceedings of the 4th ACM European Conference on Computer Systems, pp. 33–46, April 2009.

[13] R. C. Armstrong and J. R. Mayo, “Leveraging Complexity in Software for Cybersecurity (Abstract),” Association for Computing Machinery, ISBN 978-1-60558-518-5, 2009.

[14] L. Chen and A. Avizienis, “N-Version Programming: A Fault-Tolerance Approach to Reliability of Software Operation,” Fault-Tolerant Computing, p. 113, Jun 1995.

[15] J. Oberheide, E. Cooke and F. Jahanian, “CloudAV: N-Version Antivirus in the Network Cloud,” University of Michigan, Ann Arbor, MI, 2008.

[16] J. H. Holland, Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence, Michigan: University of Michigan Press, 1975.

[17] K. Wloch and P. J. Bentley, “Optimising the performance of a Formula One car using a genetic algorithm,” Parallel Problem Solving from Nature – PPSN VIII, pp. 702–711, 2004.

[18] L. E. Panetta (US Secretary of Defense), “Press Transcript,” US Department of Defense, 11 Oct 2012. [Online].

[19] G. Gandhi, “Financial Risk Analysis using Agent Based Modelling”. [Online].

[*] Alan Turing – a mathematician who came to fame for his role in breaking the Enigma machines used to encrypt communications during the Second World War – proved that no general algorithm can exist that decides, for all program–input pairs, whether the program will terminate or keep running forever.

[†] Deductive reasoning is a ‘top-down’ reasoning approach, starting with a hypothesis and using data points to substantiate the claim. Inductive reasoning, on the other hand, is a ‘bottom-up’ approach that starts with specific observations, which are then generalised to form a general theory.


Market Global Structure

A multinational firm’s organizational structure that reflects the “global” philosophy that the world is basically one homogeneous market is called a “global structure.” For example, by this philosophy, many large electronics and consulting firms, while allowing for minor local adjustments to packaging and language, basically project the same kinds of products and services around the world. However, there are several differences in terminology and philosophy in this field.

First, a “global” philosophy is characterized by seeing the world as one more-or-less monolithic market with similar tastes and preferences. In contemporary parlance this is the opposite of a “multidomestic” (or multinational or multilocal) philosophy, by which one sees the world as made up of many more-or-less unique markets, each with its distinct tastes and preferences. A position between these two extremes is called regionalism, whereby one sees the world as being made up of a small number of quite homogeneous regions. These constructs can be applied to industries, firms, and organizational structures, and it is informative to understand how global thinking applies at the industry and strategic levels.

For example, George Yip sees globalization as a function of the degrees to which the global marketplace is fragmented, local customer needs are distinct, local sourcing imperatives exist, costs are heterogeneous, and trade barriers are significant to cross-border commerce. Thus Randall Schuler, Peter Dowling, and Helen De Cieri and other scholars refer to some industries – like commercial aircraft, copiers, generic drugs, and most electronics and computer hardware – as global industries; while retail, the food industry, and most services are considered substantially multidomestic.

Multinationals – and other large firms, for that matter – are generally divided into several parts, units, or divisions that reflect some aspect of their strategy. This link between structure and strategy was made famous in the classic book Strategy and Structure by Alfred DuPont Chandler. For example, a firm with five product categories may be structured into five divisions, each division mandated to manage one of the product categories. Chris Bartlett and Sumantra Ghoshal build on this logic as they focus on organizational responses to global and local forces, describing four organizational types (or mentalities) for the global organization that represent organizational and strategic responses to various industry contingencies. For example, they describe the global firm that views the world as its market, assumes that national tastes are more similar than different, and believes in standardized products; these strategic approaches require structural integrative mechanisms to coordinate worldwide activities, production, marketing, research and development (R&D), and planning.

Thus, it is these structural processes that are implied by the term global structure.

Mechanisms: All large organizations need structures that coordinate and integrate to some degree; however, the global strategy relies on these structures for implementation. There are three major aspects to this kind of structure. The first is the locus of strategic responsibility. The second is the way the structure separates reporting relationships and dictates how the firm is divided; this aspect may be called structuring. The final aspect is the kinds of coordination and integration systems; these may be called processes.

Locus of strategic responsibility: A crucial aspect of organization structure is the extent to which decision-making autonomy is delegated from corporate headquarters to parts of the business. In the global firm there is a strategic imperative to centralize important strategic decisions. For example, decisions on product range, research and development, branding, and human resource management tend to be made at corporate rather than subsidiary level. Even customer service, the function most likely to be located close to the customer, may have its major policies and standards set at corporate level.

Structuring: A characteristic of the global structure is that it is relatively blind to geographic distance and instead focuses on one or more other strategic dimensions – like products or markets – that it considers more important (than geography) to its success at implementing a global strategy.

Thus a global structure commonly has a major top-level division into product categories (generally called a global product structure), markets (global market structure), or some matrix (global matrix structure). As an example of a global product structure, Procter & Gamble (P&G) has three global product divisions, namely Global Beauty, Global Household Care, and Global Health & Well-being. However, the distinction between product and market structures is likely to be blurred. For example, Boeing’s business units seem like different product divisions (commercial airplanes, integrated defense systems, and Boeing capital corporation), but in effect all three have the aim of marketing various aircraft and aerospace products and services to different market groups – in this case commercial airlines, governments, and financial intermediaries.

The global matrix structure attempts to organize activities by two (or more) managerial dimensions – like product, geography, and/or market. For example, H. J. Heinz simultaneously has geographic divisions in North America, Europe, Australia/New Zealand, and emerging markets (selected countries in Asia and eastern Europe); several product categories, namely ketchup/condiments/sauces, meals and snacks (including frozen foods), soups/beans and pasta, and infant feeding; and separate operations for retail and food service channels. In a global structure these various departmental and business divisions may have necessary aspects of local focus, but essentially they work together to implement the firm’s global strategy.

Processes: Finally, and very importantly, structure implies processes such as coordination, integration, and information systems. These processes tend to be pronounced in the global structure, and are generally very common in contemporary organizations. Kwangsoo Kim and Jong-Hun Park identify four generic integrating mechanisms: (1) people-based integrating mechanisms that use people to coordinate business operations across borders, involving the transfer of managers, meetings, teams, committees, and integrators; (2) information-based integrating mechanisms that use information systems such as databases, electronic mail, Internet, intranet, and electronic data interchanges to integrate business operations across borders; (3) formalization-based integrating mechanisms that rely on the use of standardized or common work procedures, rules, policies, and manuals across units; and (4) centralization-based integrating mechanisms that retain decision-making authority at corporate headquarters – a concept similar to that in the “locus of strategic responsibility” section above.

The more global the firm, the more it uses these processes. Intel, for example, uses relatively few formal structural mechanisms, but several cross-functional teams – including information technology (IT), knowledge management, human resources, finance, legal, change control, data warehousing, common directory information management, and cost reduction teams – as integrating processes that allow it rapid adaptation to changing conditions. Integrating mechanisms can also have negative effects – perhaps tying the hands of local managers, imposing compliance costs (both time and other resources), and creating unintended bureaucratic barriers to efficient decision making. A study by David Brock and Ilene Siscovick, for example, found that the effects of integrative factors at subsidiary level were often negative.


MOVING TO CANADA IS A JOKE…The Economic, Military, and Social Integration of North America

Eh? I can think of scores of reasons to move to Canada (or opt for the Mexican Riviera) . . . any place but Babylon the Great: The USA. The War in Iraq . . . or better yet: The entire Military-Industrial Complex sucking the life-blood out of Americana for starters. Or, how about the whole hedonistic culture of greed, avaricious appetites, and super sizing all things godly and ungodly–from Hollywood to Mega Churches; indeed, ours is a “city set on a hill which cannot be hid” but the closer you get to this glittering jewel, the more it resembles the “Little Shop of Horrors,” you know, that flesh-eating plant crying out: Feed me, Seymour! Conspicuous consumption of a nation which spends $1.8 Billion more each day than the whole earth combined and finds herself some $14 Trillion in debt (National Debt + Balance of Payment/Trade Debts) is a bit too much, wouldn’t you say?–after all, she represents but 5% of the world’s population.

Come on, half the eagle is in a declared state of emergency and the overt identification by Big Brother of all things human is prepared and/or is itching to pounce upon American liberties once thought sacrosanct by both the ACLU and the NRA by euphemistic legislation called Patriotic Acts, and finally, a cashless society where all of us are implanted with chips awaiting true identity and debit through scanning devices at your local Safeway.

The clock is ticking. Peak oil, where America’s “zero sum game” is played out–for you to gain I must lose–refuses to share her bounty with the Asian tigers of China and India; and, of course, they are more than pleased with our indulgence. Like Rome, our legions amongst the world’s “provinces” are stretched thin–and the draft can’t be all that far off if we’re to maintain our economic edge and SUV-lifestyle (latest stats for the past two years show that 58% of all vehicles purchased in the USA are SUVs, pickups, or plain old gas guzzlers). And, as if these outrageous consequences weren’t enough to abandon ship–toss in the worst natural disaster ever to afflict the homeland: Katrina; man, wait till we finance that one!

So . . . isn’t it about time to flee to Canada or head for the Mexican Riviera? Eh? Canada’s a safe haven for pot-people and same-sex marriage is the rage. Crime’s relatively low compared to the lower 48 and the death penalty’s been outlawed for nearly thirty years. Finally, most of the 125,000 Viet-Nam Era draft dodgers who fled to Canada stuck around and now constitute the leading edge of all the above progressive life-style. Wow, we’re talkin’ about socialized medicine for all–a veritable paradise compared to the inflictions of them patriots down under. Cheap drugs (includes tons of cannabis), affordable housing, tiny military budget, etc., etc.–a little cold, but you’ll get used to it.

Finally, if Hollywood’s collective apoplexy over President Bush’s election can be believed–we’re outta here . . . a few of these righteous indignations (unfulfilled) are duly noted, if for nothing else, their entertainment value. Notwithstanding the Hollywood stars and directors who claimed exodus was their only option under Bush–Barbra Streisand, Alec Baldwin, Michael Moore, Robert Altman, Lynn Redgrave, Pierre Salinger (now deceased), and Cher–all found the allure of Babylon on the Hudson irresistible; so much for leftist vibratos. Misquoted or just plain fluff–they all abide within the walls of the crystal palace celebrating the party atmosphere, as they star in a movie sequel to the “Left Behind Series” entitled: Talk is Cheap, Follow Us.


Patriots would exclaim we’re selling off and out America; globalists would see dollars galore; socialists would see an on-going rip off; and a whole bunch of people in the middle could care less (a.k.a. “victims anonymous”).

Meanwhile Deanna Spingola in “Building a North American Community” (July 15, 2005) keeps telling it like it is:

“While our sons, daughters, sisters, brothers, mothers and fathers having been spilling their blood in the sands of Iraq under the guise of restoring the country to the Iraqi citizens, our president is in the process of giving our country to the elite One World Order insiders. While our president is requiring protected borders in Iraq, he is obliterating, not only our southern, but our northern borders.”

Actually, Deanna (and you’ve got to read her entire article) is referring to the Bush/Fox/Martin meeting (USA/Mexico/Canada) held at Baylor University in Waco, Texas on 23 March 2005, where they were busy about establishing the “Security and Prosperity Partnership of North America” – to wit, the SPPNA’s troika:

“We, the elected leaders of Canada, Mexico, and the United States, have met in Texas to announce the establishment of the Security and Prosperity Partnership of North America.

“Over the past decade, our three nations have taken important steps to expand economic opportunity for our people and to create the most vibrant and dynamic trade relationship in the world (i.e., NAFTA; my insert). Since September 11, 2001, we have also taken significant new steps to address the threat of terrorism and to enhance the security of our people.

“But much still remains to be done. In a rapidly changing world, we must develop new avenues of cooperation that will make our open societies safer and more secure, our businesses more competitive, and our economies more resilient.

“Our Partnership will accomplish these objectives through a trilateral effort to increase the security, prosperity, and quality of life of our citizens. This work will be based on the principle that our security and prosperity are mutually dependent and complementary, and will reflect our shared belief in freedom, economic opportunity, and strong democratic values and institutions. It will also help consolidate our efforts within a North American framework, to meet security and economic challenges, and promote the full potential of our people, by reducing regional disparities and increasing opportunities for all.”


Now don’t go conspiratorial on me . . . hee-hee . . . don’t need to . . . let the truth speak for itself:

It was on May 17, 2005 that the CFR formalized its “Independent Task Force” to review at length the parameters of such a three-pact agreement among the USA, Canada, and Mexico. This 31-member force de jure was chaired by John P. Manley, Pedro Aspe, and William F. Weld and vice-chaired by Robert A. Pastor, Thomas P. d’Aquino, and Andrés Rozental. Cooperating with the CFR’s efforts were the Canadian Council of Chief Executives and the Consejo Mexicano de Asuntos Internacionales.

Indeed, the composite document released by the aforementioned is the very title of Spingola’s article . . .

No wonder that Spingola and other American patriots view this as the “Great American Give-a-way!”

Take a gander at their timid prognostications and guess why moving to Canada’s a joke . . .for what NAFTA (North American Free Trade Agreement) and CAFTA (Central American Free Trade Agreement) could not destroy, FTAA (Free Trade Area/Agreement of the Americas . . . a.k.a. “Building a North American Community”) fully intends:

“We are asking the leaders of the United States, Mexico, and Canada to be bold and adopt a vision of the future that is bigger than, and beyond, the immediate problems of the present . . . they could be the architects of a new community of North America, not mere custodians of the status quo.” (Canadian co-chair, John P. Manley, Former Canadian Deputy Prime Minister and Minister of Finance).


Now, listen to Spingola’s assessment of all this–and, don’t think she’s some brainless Libertarian gone amok down in Texas somewhere . . .

“This basically means that Americans must give up their freedoms and hard won sovereignty along with all resources for the greater good of the ‘New Community.’ It is a socialistic equalization designed to make slaves of everyone in all three countries. This will occur as a result of the secret, subversive activities of our ruling elitist who have never sacrificed anything except their integrity. When it comes time to sell this socialistic venture, Bush will adopt his multipurpose ‘Christian’ stance and use every possible guilt maneuver to encourage this good hearted Christian country to open our hearts to the less fortunate. This is a ploy to make all of us less fortunate. There will be many who will fall for this scam under the pretext of Christianity. If we think Christians are media maligned now, just wait! We will be the most hated inmates in the camp!”

Wow! Powerful projections here, right? I’m sure we’ll somehow meet up with Spingola one day–if not in glory, then in some gulag cell contemplating how all of this got out of hand . . . I mean, if Shirley MacLaine went out on a limb, Spingola’s going out on a twig:

“All of this is done under the facade of protecting us – from terrorists? The worse terrorists we face are those who serve in our government. Another day that shall live in infamy, 9/11, has done much to serve the purposes of those whose main goal is to establish the One World Order. What an opportunistic event! It couldn’t have worked any better if they had planned it!”

O CANADA – VIVA MEXICO – Life is good!

Of course most Americans, Canadians, and Mexicans can’t stomach all of this unification at once; thus, the GREAT TRANSITION awaits us all:

Unified military command? Listen to what the CFR plans for your future:

1. Establish a common security perimeter by 2010.

2. Develop a North American Border Pass with biometric identifiers.

3. Develop a unified border action plan and expand border customs facilities.

The CFR web site is effusive in its sacrifice of sovereignty:

4. Create a single economic space:

5. Adopt a common external tariff.

6. Allow for the seamless movement of goods within North America.

7. Move to full labor mobility between Canada and the U.S.

8. Develop a North American energy strategy that gives greater emphasis to reducing emissions of greenhouse gases – a regional alternative to Kyoto.

Hey, and let’s shoot the gap – listen, we’re talkin’ INTEGRATION BIG TIME . . . and we’re not whistling Dixie (although we might permit that in the new North American Federation of United States as an expression of multiculturalism–harkening back to the good ole days when a different form of slavery abounded) . . . so, we might have to:

9. Review those sectors of NAFTA that were excluded.

10. Develop and implement a North American regulatory plan that would include “open skies and open roads” and a unified approach for protecting consumers on food, health, and the environment.

11. Expand temporary worker programs and create a “North American preference” for immigration for citizens of North America.

12. Spread benefits more evenly:

13. Establish a North American Investment Fund to build infrastructure to connect Mexico’s poorer regions in the south to the market to the north.

14. Restructure and reform Mexico’s public finances.

15. Fully develop Mexican energy resources to make greater use of international technology and capital.

16. Institutionalize the partnership:

17. Establish a permanent tribunal for trade and investment disputes.

18. Convene an annual North American summit meeting.

19. Establish a Tri-national Competition Commission to develop a common approach to trade remedies.

20. Expand scholarships to study in the three countries and develop a network of Centers for North American Studies.


Now, that doesn’t sound so bad–in point of fact, we can sort of ease ourselves into this new North American “framework.” Especially enlightening are the PROGRESSIVE comments of people like William F. Weld (another co-chair) former Governor of Massachusetts and U.S. Assistant Attorney General:

“We are three liberal democracies; we are adjacent; we are already intertwined economically; we have a great deal in common historically; culturally, we have a lot to learn from one another.”

Three democracies? Now, let’s not mince words like “Democracy” vs. “Republic” – we all know what we’re talking about here, right? ” . . . and to the Republic for which it stands.” Of course, that’s been turned into a prayer–so much for the Republic, and so much for allegiance! Oh well, we weren’t all cracked up to be much of a Republic anyway, right?

CFR’s been around since 1921, and Mexico and Canada duplicated these efforts in 1976 (the Mexican Council on Foreign Relations and the Canadian Council of Chief Executives). CFR members frequently address House and Senate committees that attempt to probe academia at the highest levels–and, of course, the CFR is right up there with the best (if not THE best) of them (you know, big bucks and all). Enter Dr. Robert A. Pastor, V-P, International Affairs Professor/Director, Center for North American Studies, American University, speaking on the “Travel Initiative” within the Western Hemisphere.


Dr. Pastor (fourth Musketeer amongst the three) didn’t have to travel far . . . he simply sat there before the Senate Foreign Relations Subcommittee on Western Hemisphere, Peace Corps and Narcotics Affairs on June 9, 2005 and quoted the CFR’s “Independent Task Force” (May 17, 2005) and their fine efforts to frame a new “Future of North America.”

In sum, Dr. Pastor accused the three nations of “small-mindedness” – and felt that in order to “secure the homeland” we shouldn’t zero in on U.S. interest; instead, let’s stop worrying about our borders with Mexico and Canada and let her rip! Let’s start thinking global here in North America.

First, let’s integrate the economies–we’re well on our way on that one; next, let’s get down socially, especially in light of 9/11 (terror, as in Nazi Germany, is a real catalyst for “positive change”); and, thirdly, let’s forge ahead and go beyond these petty terrorists to create a massive North American Community–no-holds-barred thinking is needed around this bastion of isolationism! All for one and one for all . . . each benefiting from the other’s success, while avoiding our demise when we concentrate on our problems–that’s the ticket (whoops, travel-talk; no pun intended).

Pastor’s Center for North American Studies at American University is designed to think dynamically, think huge–let the tide rise and pick us all up! Feel the surge! As a CFR member, Pastor poured praise upon NAFTA–no big deal if a few jobs have gone south. In the totality of it all, we all benefit, for is not the dictum we hold these truths to be self-evident that all men are created equal, or is it that some are more equal than others, or that some benefit more than others? Oh, well, we can work out the details later, because the War on Terror trumps all.

Sure, we all might have to work for lower wages and benefits for starters; and we might, if we want to hold up the current social order, work longer hours–but just think: More of us will be working–Dads, Moms, daughters, brothers, etc. And, once in a great while, we’ll be allowed to vacation–not bad, right?


During the latest round of Russian-American talks between Bush and Putin (this week, Sept. 11-17), the biggie was energy policy: How to secure Russian petroleum over the protestations of the Chinese and Indians? Energy–now that’s why the USA-Canada-Mexico deal is absolutely mandatory.

The lion’s share (58%) of our imported oil comes from Canada and Mexico anyway, right? So, isn’t it about time we awoke and smelled the roses? Economically, we’re integrated and America’s insatiable love with the automobile demands that we fully integrate the livin’ tar out of these economies. Canada is our biggest trading partner anyway–86% of all Canadian exports go to America, whereas 89% of Mexico’s go north–somehow we consume it all!

“Since the enactment of the North American Free Trade Agreement eight years ago, Mexico has surpassed Japan and has become the United States’ second-largest trading partner–Canada is the biggest. Thanks to the open trading borders, companies in all three countries have been stretching their reach throughout North America.” (Forbes, 2002).

What a difference a little time makes! Yep, ole NAFTA keeps workin’ its wonders. Now, over 40% of U.S. trade with Canada is intra-firm, that is, trade occurring between parts of the same firm operating on both sides of the border–and that trade totals over $500 billion annually between the two. Insofar as Mexico is concerned, both nations are on track for $300 Billion in 2005! Thus, our little North American Free Trade Agreement is closing in on $1 Trillion annually, import/export.

Pastor claims that 9/11 actually threatened to cripple “North American long-term integration.” Help me understand the problem, Dr. Pastor. For one little ditty, we went from 3 million illegals in the 1990s to 11 million today–we’re getting integrated in spite of your rhetoric!

So, the theory goes, Canada is rich in resources (and Mexico’s Gulf Oil too), and Mexico is rich in cheap labor–so can’t we all just get along? If we’re getting integrated economically, in any event, maybe we can all enjoy cheaper oil at the pumps–say, by the way, that bridge in Brooklyn is going for half off this weekend only! In conclusion–can’t we all just get along and drive off in our Hummers into Baja’s sunset or up through the Yukon’s wilderness?


“For nearly two years now, Ottawa has been quietly negotiating a far-reaching military cooperation agreement, which allows the US Military to cross the border and deploy troops anywhere in Canada, in our provinces, as well station American warships in Canadian territorial waters. This redesign of Canada’s defense system is being discussed behind closed doors, not in Canada, but at the Peterson Air Force base in Colorado, at the headquarters of US Northern Command (NORTHCOM).

“The creation of NORTHCOM announced in April 2002, constitutes a blatant violation of both Canadian and Mexican territorial sovereignty. Defense Secretary Donald Rumsfeld announced unilaterally that US Northern Command would have jurisdiction over the entire North American region. Canada and Mexico were presented with a fait accompli. US Northern Command’s jurisdiction as outlined by the US DoD (Department of Defense) includes, in addition to the continental US, all of Canada, Mexico, as well as portions of the Caribbean, contiguous waters in the Atlantic and Pacific oceans up to 500 miles off the Mexican, US and Canadian coastlines as well as the Canadian Arctic.

“NorthCom’s stated mandate is to ‘provide a necessary focus for [continental] aerospace, land and sea defenses, and critical support for [the] nation’s civil authorities in times of national need.'”

    Surprise, surprise! You see, ever since FDR sprung his “attack on Canada is an attack on the USA,” and the Canadians reciprocated in 1938, this little military integration thing has really gotten serious. Maybe you thought NORTHCOM was just for the Canadians–apparently, the only guys squawking about it are them draft dodgers worried about Sam’s long reach. Don’t think we’re talking about a slight modification here for the Canadian Mounted Police and the Mexican Federales . . .

    Donald Rumsfeld postured that North America’s geographic command center (smack dab in the Middle of the USA) “is part of the greatest transformation of the Unified Command Plan since its inception in 1947.”

    Furthermore, as Army, Air, Naval, and Special Forces integrate, NATO’s and NORAD’s old frameworks are getting, and will continue to get, a complete facelift–i.e., you ain’t seen nothin’ yet! To facilitate this shakeup (between the USA and Canada), a Binational Planning Group (BPG), a mix of US and Canadian officers, now works in tandem with NORTHCOM. The BPG has the following goals:

    • to share maritime surveillance and intelligence

    • to coordinate binational actions involving military and civil agencies

    • to design and conduct joint training programs and exercises.

    This means of naval and military integration came about in 2002 through the Canadian Department of National Defence (DND), which initiated the BPG. The BPG’s singular mission is to develop an “Enhanced Canada-US Security Cooperation with the US . . . starting with a simple two-year mandate, the BPG works alongside NorthCom and NORAD in Colorado Springs.”

    It won’t be long before the militaries of these allegedly sovereign states are fully integrated. In sum, here’s what Americans who’ve just moved to Canada have to look forward to:

    “And ultimately what is at stake is that beneath the rhetoric, Canada will cease to function as a Nation . . .


    The Importance of BSIT Courses in Career

    A Bachelor of Science in Information Technology, or BSIT, is a bachelor’s degree awarded to students who complete an undergraduate course in information technology. The time required to complete the course varies by country. The degree focuses on subjects such as databases, software and networking in general; it stresses specific technologies, unlike the mathematical and theoretical foundations emphasised in a computer science degree. The degree is awarded when one completes studies in the domains of software development, software testing, web design, databases, programming, computer networking and computer systems. Graduates holding this degree are proficient in technology-related tasks such as storing, processing and communicating information between devices like smart phones, computers and other electronic appliances. The course also covers the secure management of the huge amounts of data accessible by a wide variety of systems, located both locally and internationally.

    To understand BSIT courses properly, we should recognise that the course is all about the development, design, implementation, support and management of information systems, using software applications and computer hardware to protect, transmit, process, store, convert and securely retrieve information. In other words, it is an exhaustive course covering everything from studying computing technology and installing applications to designing some of the toughest computer networks and information databases.

    BSIT in Detail

    The basic eligibility criterion for enrolling in this course is 10+2 in the Science stream, with Maths, Physics and Chemistry as the main subjects. One should have secured a minimum of 50% marks in Maths from a recognized board of the country. Some reputed colleges conduct an online entrance examination for admission to their BSIT courses; final selection depends upon the total marks aggregated in the final 10+2 exams and the entrance examination. Beyond scoring the stipulated percentage, candidates should have good communication skills and the ability to handle a variety of tasks. They should have a strong grasp of mathematical concepts, be able to manipulate data, and follow multi-step logic. Attention to detail and a strong mental focus are further criteria. People who already hold a Master’s degree but want to pursue further studies in information technology can also opt for this course.

    Some colleges in India that offer this degree course are Anna University in Chennai, Assam Professional Academy in Chennai, Coxtan College in Dhanbad and Apar India Institute of Management and Technology.

    People who have completed this degree can find coveted jobs in areas such as Aerospace, Power, Electronics and Communication, the Post and Telegraph Department, All India Radio, the IT industry, the Civil Aviation Department, Internet Technologies, the Defence Services, Hindustan Aeronautics Limited and many more. They can work as an application programmer, system designer, system analyst, quality analyst, online editor, system administrator, enterprise information officer, maintenance technician, computer support specialist, legal information specialist, database administrator or strategic information planner, among other coveted occupations.

    How is BSIT course beneficial?

    Now let us look at how enrolling in a BSIT course helps a professional.

    • This degree is the foundation for further studies such as an M.Sc, Ph.D or M.Phil degree in Information Technology. After completing these degrees, one can apply for a lectureship in colleges or universities.

    • Degree holders in information technology have opportunities in sectors such as electronics, communication, IT, manufacturing, business, finance, banking, entertainment, education, defence, police, railways and a plethora of other options.

    • This degree has helped many students learn about IT in detail, as a result of which the IT industry in India has earned global recognition.

    • This degree has contributed to growth in IT-related jobs in India of about 5%.

    • Students with this degree often receive very lucrative job offers from reputed companies, not only in India but also abroad.

    Career and BSIT Course

    Information technology is an incredibly varied field and certainly is not limited to fixing laptops or computers. IT jobs nowadays require training, skill and logical thinking. The most rewarding part of opting for this degree is getting acquainted with the practical aspects of working in the IT industry. As already discussed, there are various career choices open to someone in the IT field. As technology progresses day by day, the need for professionals in this field will keep increasing; the greater the skill set, the greater the demand. BSIT courses help in upgrading skills and expertise. Majoring in this domain brings exposure to many opportunities and jobs. Who does not want to secure a dream job? This degree helps in getting a good job with a good pay package, and remuneration matters when it comes to choosing jobs. One of the most attractive factors in choosing information technology as a profession is the hefty pay package associated with it. With a BSIT degree, one can be reasonably assured of a starting salary much higher than in other jobs, with greater hikes and a wonderful life ahead to look forward to.

    Source by Stratford University


    The History of the Grumman Corporation: The Later Years

    Stretching its Long Island reach eastward, the Grumman Corporation leased 4,000 acres of a United States Naval Air Facility in Calverton, which it designated its “Peconic River” plant. During the latter part of 1953, manufacturing/engineering and flight operations buildings, along with two runways stretching 7,000 and 10,000 feet, arose from the eastern expanse, thus overcoming the Bethpage shortcoming. Additional land acquisition expanded it to just under 7,000 acres.

    Business Aircraft:

    Following his previous strategy of offering a series of amphibious aircraft targeted at the private and commercial market with the Goose, the Widgeon, and the Mallard, Leroy Grumman made an impromptu decision to design a more modern, land-based turboprop counterpart in an effort to diverge beyond the traditional military market on which he had hitherto relied and avoid laying off otherwise unneeded, but experienced engineering staff.

    Market studies of, and feedback from, numerous, Fortune 500 companies indicated the need for such a corporate transport cruising at 350 mph and covering 1,800- to 2,200-mile sectors. Because of the speed advantage of the turbine, and the proven reliability of the Rolls Royce Dart engine, it was decided to optimize an airframe around it.

    Unlike its amphibious predecessors, it emerged as a low, straight-wing monoplane, of 78.4-foot span, with a conventional tail, powered by two, 2,210 shaft-horsepower Rolls Royce Dart 529-8X or -8E turboprops, and rested on a tricycle undercarriage. The aircraft, with a 63.9-foot length, sported large, circular windows and accommodated from ten in an executive interior to 24 in a high-density, airliner configuration, piloted by a crew of two.

    First taking to the skies from Bethpage on August 14, 1958 as the G-159 Gulfstream I, it was FAA certified on May 21 of the following year. At a 35,100-pound gross weight, it typically cruised at 334 mph and flew 1,865-mile stretches.

    Sinclair Oil, the first customer for the type, became representative of the many corporations which operated it for employee transport. Limited, third-level and commuter operations were undertaken by Bonanza, Golden West, and Zantop in the US, Wardair in Canada, and Cimber Air in Denmark, although, even at its maximum, 24-passenger capacity, it suffered from higher seat-mile costs than other, purposefully-designed turboprop regional competitors, such as the 40-seat Fokker F.27 Friendship.

    Fitted with an aft, port, 62-by-84-inch cargo door, it was flown by small package carriers DHL and Purolator. The US Navy also operated the type.

    Production ceased at 200.

    Although the Gulfstream I proved reliable, it could not remain competitive in the emerging business jet market, which was becoming defined by the higher speeds and lower block times of the Lockheed JetStar, North American Sabreliner, and Dassault Falcon.

    Using the basic fuselage of the G-1, Grumman designed a successor optimized for Mach 0.83 cruise speeds, 43,000-foot service ceilings, and transcontinental ranges against headwinds, producing an aircraft which featured the now-standard configuration for the corporate market, as well as that of the early low-capacity, short-range jetliners: low, swept wings; aft-mounted, nacelle-encased turbofans to reduce cabin noise, minimize asymmetric thrust tendencies during single engine-out conditions, and leave the wing unobstructed for maximum lift capability; and a high t-tail to eliminate engine flow interference with the horizontal stabilizer surfaces.

    Powered by two 11,400 thrust-pound Rolls Royce Spey Mk 511-8 turbofans, the aircraft, with a 79.11-foot overall length, sported 25-degree swept wings with a 68.10-foot span and rested on a dual-wheeled tricycle undercarriage. Designated the G-1159 Gulfstream II, it had a 65,500-pound gross weight and 3,292-mile range, which increased to 4,276 miles with wing tip fuel tanks.

    Program go-ahead, on May 5, 1965, preceded the first flight from Bethpage a year and a half later, on October 2, but the Long Island plant was only to witness the production of a handful of them. A new factory, located in Savannah, Georgia, and opened in 1967, became the exclusive domain of its assembly and a new, spin-off company was subsequently created to overcome the inexperience in nonmilitary design marketing.

    Like its turboprop predecessor, the Gulfstream II was operated by numerous, worldwide corporations, as well as the Coast Guard for staff transportation purposes. Two were modified by NASA to serve as Shuttle Training Aircraft (STA), simulating post-atmospheric Space Shuttle re-entry handling.

    Of the 256 G-IIs produced, 121 were manufactured by Grumman, 106 by Grumman American, and 29 by Gulfstream American.


    Having designed both civil and military aircraft for all three operational realms of land, sea, and air, Grumman soon pitted its engineering talent against the ultimate one: space, transcending its involvement well beyond the atmospheric Shuttle Training Aircraft of the two modified Gulfstream IIs.

    One of nine manufacturers to submit written proposals for the originally designated “Lunar Excursion Module”–itself the third of the three integral Command, Service, and Lunar Module components of the Apollo moon mission–Grumman was selected by NASA on November 7, 1962.

    The spacecraft, whose designation was later shortened to “LM,” was intended to transport two astronauts from the lunar-orbiting Command-and-Service-Module unit to the surface and later return them, thus needing to serve as transport, lifeline, surface habitat, and communication terminal in an uninhabited, airless, never-before-visited world which could not support the autonomous functioning of human beings. In the event of failure, there was no human or any other help on the lunar surface. The LM, therefore, had to operate flawlessly, yet was not, and could not have been, tested in the earth’s atmosphere. Its first moon landing was, in effect, its first real-condition test flight.

    Subdivided into two stages, the Lunar Module featured a lower, or descent, stage, which was powered by a 9,700 thrust-pound liquid propellant rocket, carried the Apollo Lunar Surface Experiment Package (ALSEP), and sported four extended, spindly-appearing legs for landing and weight distribution. Serving as the subsequent launch platform for the upper, or ascent, stage, it remained on the moon as a testament to man’s presence. The ascent stage itself, powered by a 3,500 thrust-pound liquid propellant rocket engine–which also provided power to four Marquardt reaction control thrusters–housed the two-person crew of Mission Commander and Lunar Module Pilot, who were harnessed in a standing position before the controls and the instrument panel.

    Because of the reduction in gravity on the moon, the LM was able to employ light structures and what would have been thin, easily-crushable outer skins on earth, reducing the amount of thrust required to operate it and the weight carried by the integral, tri-component spacecraft which served as the bridge between the two worlds. Solar radiation and dust protection was ensured by an aluminum shield external wrapping and a second aluminized Mylar sheet.

    Dimensionally, it rose 22.11 feet high as an integrated, ascent stage, descent stage, and extended-leg unit and 14.11 feet wide at its widest, upper-stage point.

    Unlike Grumman’s other, mass-produced designs, the Lunar Module was painstakingly assembled, wrapped, and virtually coddled by human hands, one vehicle at a time. In all, the effort resulted in two Lunar Module Test Article simulators, ten Lunar Module Test Article modules, and 12 operational Lunar Modules, although, without the intermittent contract cancellation, that total would have increased to 15. Because of the handmade process and minor modifications of the later vehicles, their gross weights varied between 32,000 and 36,025 pounds.

    The first manned Lunar Module flight occurred on March 3, 1969 when it separated from the Command Module of Apollo 9 while in earth orbit, covering a 113-mile distance before jettisoning its descent stage and returning to rendezvous and dock with it.

    The now-famous flight, and the purpose for which it had been designed, however, took place on the July 16, 1969 Apollo 11 mission, when Neil Armstrong, separating from the spacecraft five days later in LM-5 “Eagle,” linked earth, and humanity, with its moon for the first time since their creations, manually overriding the controls to avoid alighting in a crater and settling on the Sea of Tranquility. Descending the landing leg-attached ladder in his self-contained space suit, he proclaimed with his equally famous words, “One small step for man, one giant leap for mankind.”

    No greater gravity, despite its reduction on the lunar surface, ever rode on a Grumman design than during this pivotal, planetary-transcendent moment and all the subsequent moon missions, and the spacecraft that had made it possible redefined the company, from an aviation to an aerospace concern.

    The Intruder and the Tomcat:

    Grumman’s last major aircraft returned it to its earthly, and military, roots.

    More than a decade before the Lunar Module had stirred the dust in the Sea of Tranquility, Grumman, along with eight other manufacturers, submitted proposals to fulfill both the Navy’s and the Marines’ requirements for an all-weather, long-range interdiction and close air support design with short takeoff and landing (STOL) capabilities to incorporate a manufacturer-designed weapons system and cruise at 500-knot speeds. Grumman’s proposal was the winning one.

    Although four development aircraft, designated A2F-1s, were ordered in March of 1959, followed by an order for an equal number a year later, the inaugural flight, occurring at Calverton on April 19, 1963, revealed handling deficiencies and problems with its Digital Integrated Attack Navigation Equipment, or DIANE.

    Considerable redevelopment resulted in the definitive A-6 Intruder. Powered by two, 8,500 thrust-pound, fuselage side-mounted J52-P-6 turbojets, the mid-, 25-degree-swept wing, low-altitude attack aircraft had a 54.9-foot overall length and 53-foot span. The upward-folding wings themselves, with a 529-square-foot area, featured a compound, leading edge sweep. Both the tail and tricycle undercarriage were conventional, but the nose wheel was equipped with a catapulting system for carrier operations. Up to 18,000 pounds of armament, attached to wing and fuselage centerline points, could be carried.

    With a 53,699-pound maximum takeoff weight, the Intruder could climb at 6,950 fpm and approached the transonic speed line of Mach 0.95 at 28,000 feet, although Mach 0.87 more closely approximated its standard cruise velocity. Range varied between 1,350 and 3,225 miles.

    Of the 474 A-6As produced up to December 28, 1970, those appearing after 1965 were powered by uprated, 9,300 thrust-pound J52-P-8As or -Bs.

    The A-6E, introducing an AN/ASQ-133 solid state digital computer and AN/APQ-148 multi-mode radar, succeeded the A-6A as the standard production version with the 483rd aircraft and first flew in 1970.

    Instrumental in the Vietnam War, the Intruder was able to deliver heavy loads during poor weather conditions, and was an integral part of US Navy carrier fleets plying the Bering Sea, the Atlantic, the North Sea, the Mediterranean, the Indian Ocean, and the Pacific. It also took part in the Libyan conflict.

    A stretched fuselage counterpart, with a crew of four and designated the EA-6 Prowler, was optimized for tactical electronic warfare.

    In order to maintain its core, carrier-based fighter design purpose, make up for the small number of F11F Tigers ordered, and recover from other design competition losses to McDonnell and Vought, Grumman set its sights, along with its funding, on a new, state-of-the-art, all-weather, air superiority fighter to incorporate tandem seating, variable-geometry wings, dual powerplants, an AN/AWG-9 track-while-scan radar, Sparrow semi-active radar-homing missiles, Phoenix long-range missiles, Sidewinder heat-seeking missiles, an internal cannon, and supersonic, beyond-Mach 2 speeds.

    Of the design proposals submitted by General Dynamics, Lockheed, LTV, McDonnell, North American, and Grumman, Grumman itself was awarded a research, development, test, and evaluation contract on February 3, 1969.

    Employing steel and titanium construction, the resultant F-14 Tomcat was powered by two 12,350 thrust-pound Pratt and Whitney TF30-P-412A below-wing mounted engines, whose power output increased to 20,900 pounds of thrust with afterburner deployment, and featured variable-geometry, swing-wings. The latter, automatically configured according to speed and flight phase, were equipped with glove vanes, slats, and flaps, and varied between a 20-degree sweepback (with a corresponding 64-foot, 1.5-inch span) and 68-degree trans- and supersonic-speed sweepback (and 38-foot, 2.5-inch span). Manual reconfiguration, to 75 degrees, minimized carrier stowage space requirements. Sporting twin vertical tails, again to reduce storage space requirements by decreasing its overall height, it became the first production aircraft to incorporate boron-epoxy composites in its horizontal stabilizer skins.

    First flying on December 21, 1970 from Calverton, the sleek Tomcat, with a 62.8-foot length, had a 74,349-pound gross weight and superlative performance, climbing at 32,500 fpm and cruising at 610 to 1,544 mph.

    The initial, F-14A version, with a 712-aircraft production run (including 80 for Iran), was succeeded by the F-14A+, which offered higher-thrust General Electric F110-400 engines and other modifications, and the F-14D, which introduced a glass cockpit and a digital avionics suite.

    Coupling its dogfighting capability with its long-range missile armament and radar, the Tomcat served in the Fleet Air Defense (FAD) and Deck-Launched Intercept (DLI) roles, becoming the Navy’s primary air superiority fighter and tactical reconnaissance aircraft from 1972 to 2006, having taken part in numerous missions, including those of the Gulf of Sidra and Operations Desert Shield, Desert Storm, Deliberate Force, Allied Force, Desert Fox, Enduring Freedom, and Iraqi Freedom.

    However, Navy-submitted proposals to further upgrade the F-14D were rejected by Congress in 1994, which elected to replace the type with the F/A-18E/F Super Hornet instead. As the last naval fighter supported by a major defense contract, it also signaled the approaching end of Grumman’s more-than-six-decade independent reign.

    The aircraft’s last US combat mission occurred on February 6, 2006 when two F-14s landed on the USS Theodore Roosevelt, and its absolute last flight, albeit for ferry purposes, took place later that year, on October 4, when an F-14D flew from Oceana to Farmingdale, Long Island. Like a salmon returning to its origin to spawn, it ended its life on the very same soil on which it, and its Grumman creator, had begun.

    Northrop Grumman:

    The Tomcat air superiority fighter, with its advanced design, variable-geometry wings, and supersonic speed capability, signaled the company’s future, but ironically it was also symbolic of its demise. Its technology was never in question. Its financing was.

    An eight-lot, fixed-price procurement contract, providing cost guarantees to the Navy, initially ensured Grumman an 11-percent profit until the 134th aircraft was built. But inflation, ballooning from the three percent incorporated in the contract, soon reached double-digit levels, resulting in a $2 million loss on every aircraft made and causing the company to skirt the fringes of bankruptcy. Grumman had accepted the already-low price in order to beat what was believed to have been an even lower bid for a comparable McDonnell-Douglas design, and post-Vietnam War budget reductions prohibited additional, and profitable, A-6E Intruder orders, forcing it to rely on the loss-making F-14 program for a larger percentage of its revenue base. Both production of the aircraft and, consequently, the personnel needed to assemble it in Calverton were forcibly reduced.

    The company’s bank credit line was ultimately cut like a knife-spliced wire.

    Only after choking on the losses generated by the first five order lots was Grumman able to secure an annual, term-renegotiable contract, enabling it to obtain a revenue-infusing loan with Bankers Trust and an 80-Tomcat order from Iran.

    Although naval battles traditionally reserved for the sky were transferred to offices and boardrooms, and the government contemplated a lawsuit directed at Grumman, the F-14 program was once again able to return to profitability, albeit with overhead and payroll reductions. But, as its long line of designs had demonstrated, technology kept advancing, from biplane to monoplane, from straight to variable-geometry wings, from piston to turbojet, and from sub- to supersonic speeds, and Grumman had to keep abreast, if not ahead, of the curve if it wished to survive as a provider of military aircraft. The latest advance was stealth technology.

    Its two, and soon-to-be last, major programs, the A-6 Intruder and the F-14 Tomcat itself, were approaching the end of their cycles, and their cash flows reflected their production declines.

    Little revenue of any significance remained to support Grumman as a long-term, financially viable company, nor did any major program appear on the horizon. During the ensuing interval, stealthy military designs such as the Lockheed F-117 and the Northrop B-2 were emerging from competitor production plants, and Grumman could neither close the gap with these other manufacturers nor muster the funding to do so.

    As a result of its own progressively more advanced military designs and solid Navy experience, it proved an attractive target to a suitor. When reduced procurement funding advocated defense industry consolidation, the Grumman Corporation, without any viable survival strategy, relented to the takeover offer by Northrop, and its stock was thus tendered on April 15, 1994, at which time it ceased to exist as an independent company after six and a half decades.

    Legacy’s Lesson:

    The seeds Leroy Randle Grumman had planted in Long Island soil-of conviction, beliefs, strengths, talents, and vision-firmly took root there, spreading across the region in the form of aircraft plants, ever-growing employment, suburban development, and economic contribution, and establishing aviation as its premier industry, before canvassing the globe with defense capability and victory.

    Toward the end, without choice, it was forced to diversify its product range, manufacturing an array of core-deviating, non-aviation items, such as solar panels, buses, windmills, and hydrofoils, with which it had little to no experience, resulting in both a loss of revenue and reputation. Ironically, one of its diversifications-that of the Gulfstream I and II-would have enabled it to establish itself in the emerging corporate aircraft market segment and would have injected it with much-needed success. Manufactured by a spun-off company, the turbofan-powered G-II was redeveloped into the most advanced, and successful, series of G-III, -IV, -V, and -650 business jets. This initial diversification strategy thus proved correct. The decision to rescind it, at this point in its still-vibrant history, did not.

    Paradoxically, the more it attempted to reinvent itself as a nonmilitary-dependent, non-aviation manufacturer in the early-1990s in order to survive, the more its last-ditch effort only accelerated and ensured its demise. And therein lies the lesson of the Grumman Corporation’s legacy.

    As long as it had remained true to the seeds Leroy Grumman himself had planted, it flourished and grew. When it had attempted to sprout something its seeds could not produce, it had withered and failed. In short, being what you are by using, as in the case of Grumman, your strengths and innate talents to create robust, mostly-carrier-borne naval aircraft to earn the nickname of “Iron Works,” is natural and breeds success. Attempting to be what you are not, despite a survival mode implementation of non-aviation core diversification strategies, is unnatural and breeds failure.

    The same principle applies to people.


    Video Game Designer and Other Hot Jobs For 2010 and Beyond

    Many kids today spend thousands of hours playing video games. Progressing from GameBoys(TM), to GameCubes(TM), Wii(TM) and XBox(TM), to the final frontier of the Massive Multiplayer Online games, gaming looks to have taken over their lives. Is there a way to capitalize on this obsession and find a career that is equally (or at least almost) as appealing? Take up the quest and discover a brave new world of future jobs that video game geeks might really love.

    Video Game Designer

    First and foremost, of course, is the job of creating video games themselves. The facts are these: the video game industry has surpassed movies in terms of the dollar value generated. Amazing! Not only is this industry continuing to expand, but it also appears to be relatively recession proof. What does it take to have a career in video game creation? First, you need to understand that there is not just one kind of video game job out there, but many. It takes a team of people with multiple talents to pull off the creation of an exciting new game. The jobs and skills involved include artists (storyboard artists, character artists, animators, texture artists), programmers (including sound and audio engineers), game designers (character design, level design, overall design), game producers (project management skills), and writers (software documentation, storytelling, character scripts).

    There has been a recent explosion in colleges, technical institutes and certificate programs to help you get the skills you need to get into this industry. While I was researching this industry, though, I found out that there are a lot of other up and coming careers that use a lot of the same skills and may be equally appealing.

    Simulation Engineer

    Just as the job of the video game designer and programmer is to make a convincing virtual world, there is a “real life” application of this for a number of industries: simulations. At one college they talked about how some students were using electrical impulses from the heart to render three-dimensional images of the heart in real time to help doctors examine what could be wrong. At RPI they are working on creating a simulation of a surgical procedure for doctors to get practice before they cut someone open! Beyond medical applications, simulations are gaining ground in aerospace, automotive engineering, and many other industries. Of course, the military has been interested in simulation for a very long time. Some of the college video game degree programs are recognizing this trend and have added “simulation” into the title of their program (for example, Daniel Webster College’s “Gaming, Simulation and Robotics” BS degree or the University of Baltimore’s “Simulation and Digital Entertainment” degree). Scientists, engineers, defense planners and educators all use simulation to facilitate their jobs.

    Mobile Application Developer

    Another huge career trend is the explosion of applications for all kinds of other devices. Just look at the iPhone or the iTouch. What can’t these things do? Creating content, software applications and who knows what else for all kinds of mobile devices is hot. So “Mobile Application Developer” is a career that is here now and will expand in the future. This is also a career that colleges and universities are already recognizing. Rensselaer Polytechnic Institute offers a degree in “Electronic Media Arts and Communication”; Worcester Polytechnic Institute offers “Interactive Media and Game Development”. Rochester Institute of Technology (RIT) has a BS in “New Media and Interactive Development”; Champlain College offers a degree in “Software Engineering for Emergent Technologies”.

    Computer Forensics and Cyber Security

    The other really hot area highlighted at colleges like RIT is computer forensics and cyber security specialists. You only have to listen to the latest news about hackers stealing 135 million credit card numbers to see why these are hot job areas. Computer forensic specialists work with criminal justice folks to dig into all the computerized records and data associated with criminal cases. We are only going to get more connected in the future, making us even bigger targets. If you like figuring out how to get around what someone else made, why not consider putting your talents to good use for everyone’s benefit? DePaul University offers a degree in “Information Assurance and Security Engineer”; RIT has “Information Security and Forensics”, while Champlain College offers “Computer and Digital Forensics”.

    These careers are real, growing, and educational resources are available now! If you are a gamer, or if you know a gamer, there are cool careers to get into. The US Department of Labor predicts that IT jobs will be among the fastest growing and highest paying over the next decade. These jobs will be critical to the nation’s economy and security, and offer the highest entry level salaries with a bachelor’s degree of any out there. Check them out, and look into the schools and programs that most excite you.


    The UAV Market Takes Off Amid Rising Military Applications

    A UAV (Unmanned Aerial Vehicle) is a small pilotless aircraft, controlled either by a remote or through an app. The global unmanned aerial vehicles (UAVs) market is poised to register a CAGR of 9.27% during 2018-2023 (the forecast period), as per a report by a market intelligence firm.
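    To make concrete what a 9.27% CAGR implies, the compounding can be sketched in a few lines of Python. Note that the 2018 base value of 100 index units is a hypothetical placeholder; the report cited above does not state the market’s absolute size here.

    ```python
    # Sketch: compounding a base value at a constant annual growth rate
    # (CAGR). The 9.27% rate comes from the report cited above; the base
    # value of 100 index units for 2018 is a hypothetical placeholder.

    def project(base: float, cagr: float, years: int) -> float:
        """Return the base value compounded at `cagr` for `years` years."""
        return base * (1.0 + cagr) ** years

    if __name__ == "__main__":
        base_2018 = 100.0   # hypothetical index value
        cagr = 0.0927       # 9.27% CAGR, 2018-2023
        for year in range(2018, 2024):
            print(year, round(project(base_2018, cagr, year - 2018), 1))
    ```

    At that rate, a market grows by roughly 56% over the five-year forecast window (1.0927^5 ≈ 1.56), which is why even a single-digit CAGR is treated as strong growth in a market of this size.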

    Military expenditure is the primary driving factor of the global UAV market. UAVs have the capability of reducing collateral damage while hovering, searching, identifying, and striking targets, which makes them an asset for the military.

    These vehicles use aerodynamic forces to navigate and perform desired functions. Drones can reach, travel, and traverse areas and facilitate ease of operations in areas where it is arduous for humans to maneuver. They are used to carry small payloads, perform delivery and minor services, carry video and static cameras for photography and videography, and perform commercial and military surveillance and operations.

    Segmentation by application:

    Based on the application, the military UAV segment accounts for more than 80% of the market. The ability of these UAVs to aid in ISR missions, aerial surveillance, and tactical operations is accelerating their adoption. However, the commercial & civilian segment is projected to register the highest CAGR, according to the market intelligence firm.

    Segmentation by class:

    Based on the class, the small UAV segment is anticipated to grow at the highest CAGR during the forecast period. The demand for small UAVs is constantly increasing, owing to opportunities in commercial applications as well as their potential battlefield usage.

    Asia-Pacific is expected to register the highest CAGR during 2018-2023 (the forecast period). Also, North America dominates the global UAV market, owing to the increased UAV application in military, homeland security, and commercial areas.

    Recent developments include BAE Systems working on UAVs with stealth capabilities. These are based on a new concept that removes conventional moving parts to provide greater control as well as to reduce weight and maintenance costs.

    Entry of new players is difficult in the defense segment due to uncompromising safety and regulatory policies and requirements. Moreover, the market is highly competitive but consolidated, with only a few vendors dominating market share. Though there is huge scope in the market, these factors severely restrict the entry of new firms.

    Some of the key market players profiled in the UAV (Unmanned Aerial Vehicle) Market include AeroVironment, Israel Aerospace Industries, 3DR, DJI, Textron, and SAAB.


    Top 5 Tech Savvy Colleges

    We’ve had many students ask us what colleges are the best engineering and computer science schools. Using a weighted average of faculty resources, technology grants, class size, and student ratings, we have developed the following list to help guide students and parents in the admissions process. The list provides a concise summary of these top engineering and tech programs.
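    The weighted-average methodology described above can be made concrete with a short sketch. The four metric names follow the article, but the weights and the sample scores are hypothetical, since the article does not publish its actual weighting.

    ```python
    # Sketch of a weighted-average school score. Metric names follow the
    # article; the weights and sample scores below are hypothetical.

    WEIGHTS = {
        "faculty_resources": 0.35,
        "technology_grants": 0.25,
        "class_size": 0.20,      # normalized so smaller classes score higher
        "student_ratings": 0.20,
    }

    def weighted_score(metrics: dict) -> float:
        """Combine normalized 0-100 metric scores into one weighted score."""
        return sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS)

    sample = {  # hypothetical normalized scores for one school
        "faculty_resources": 95,
        "technology_grants": 90,
        "class_size": 85,
        "student_ratings": 92,
    }
    print(round(weighted_score(sample), 2))
    ```

    With these hypothetical weights, the sample school scores 91.15 out of 100; shifting the weights, say, toward class size, can reorder an entire ranking, which is why two lists built from the same raw data rarely agree.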

    (1) California Institute of Technology, Pasadena, CA

    “Cal Tech” is the top ranked tech savvy college. The school is packed with great professors and has a niche in the California technology industry, including Google and Yahoo. Cal Tech is highly recommended if you’re interested in entering Silicon Valley and have a knack for inventing new technologies. Their undergraduate program is one of the best for students interested in individual research projects with professors and Cal Tech is a key part of NASA’s Jet Propulsion Laboratory.

    Some key specialties: Natural Sciences, Biotechnology, Space Sciences

    Famous alumni: Gordon E. Moore, co-founder of Intel Corporation; Charles Francis Richter, creator of the Richter Magnitude Scale;

    SAT range for incoming students: 2200-2350

    Acceptance rate: 17%

    (2) Massachusetts Institute of Technology, Cambridge, MA

    MIT is a great school with unique resources for its students. If you’re interested in graduating with the famed “MIT” degree, and want to be immersed in high quality education, every day, then this is a great place to be. With more Nobel prizes than one could count in an hour, they define excellence in engineering. Their high alumni giving suggests generations of families are happy with their educational choice. MIT probably has the highest name recognition worldwide amongst scholars and students.

    Some key specialties: Artificial Intelligence, Aerospace Engineering

    Famous alumni: Col. Buzz Aldrin, NASA Astronaut; IM Pei, world-renowned designer and architect; Robert Metcalfe, inventor of Ethernet and founder, 3COM;

    SAT range for incoming students: 2070-2340

    Acceptance rate: 13%

    (3) Cornell University, Ithaca, NY

    Cornell is one of the few colleges with engineering research programs that allow students to work directly under renowned professors. With faculty like Bill Nye (the science guy) and Steven Squyres of the NASA Mars Rover program, you will have the opportunity to learn from the best. Be prepared to work harder at Cornell than you would at most colleges. Cornell’s unique engineering science facilities include newly built Duffield Hall, which represents the university’s next high-tech step.

    Some key specialties: Engineering Physics, Nanotechnology, Biomedical Sciences

    Famous alumni: Steven Squyres, principal science investigator for the Mars rovers; William F. Friedman, a founder of modern cryptology;

    SAT range for incoming students: 1940-2240

    Acceptance rate: 24%

    (4) Carnegie Mellon University, Pittsburgh, PA

    Founded originally as “Carnegie Technical Schools” in 1900 by industrialist Andrew Carnegie, the school is primarily known for its science and research. Carnegie Mellon hosts the Software Engineering Institute (SEI), a federally funded research and development center sponsored by the U.S. Department of Defense and operated by Carnegie Mellon University. In addition, they host the Robotics Institute (RI), a division of the School of Computer Science. Overall, its solid reputation amongst scholars and education journals is a reason why students should look to this school.

    Some key specialties: Computer Science, Software Engineering

    Famous alumni: James Gosling, creator of the Java programming language; Andy Bechtolsheim, co-founder of Sun Microsystems; Vinod Khosla, billionaire venture capitalist and co-founder of Sun Microsystems;

    SAT range for incoming students: 1940-2235

    Acceptance rate: 34%

    (5) University of Texas, Austin, TX

    Considered to be one of the “Public Ivies” in America, the University of Texas has fantastic resources for its students. As a public university, it spends almost 50 percent of its engineering budget on sponsored research. UT’s Cockrell School of Engineering enrolled 67 new National Merit Scholars in 2006-2007, the university’s largest proportion of new National Merit Scholars. UT Austin enrolls the third-highest number of National Merit Scholars nationally. Plus, Texas is a great state with awesome weather and friendly people. Definitely keep UT-Austin in mind when applying to college.

    Some key specialties: Petroleum Engineering, Computer Engineering

    Famous alumni: Michael Dell, Founder and CEO of Dell Computers; Rex Tillerson, Exxon Mobil Corp. chairman and CEO.

    SAT range for incoming students: 1680-2055

    Acceptance rate: 49%*

    *Note that this admissions rate is affected by Texas’ top ten percent law, which guarantees graduating Texas high school seniors in the top 10% of their class admission to any public Texas university.

    Overall, when applying to these colleges, it will help to have someone with admissions counseling experience. College counseling is an important part of getting into these top schools. We hope this information provides you with key data on top colleges and schools.


    Women’s Leadership Lessons

    In 1979 Great Britain’s economy was bankrupt, and the newly elected Prime Minister, Margaret Thatcher, immediately set to work privatizing all nationalized industries, such as aerospace firms, telephone firms, utilities, the National Freight Company, and public housing, which was sold to its tenants. She sold all these industries at favorable terms to promote private enterprise. Her aim was to reduce government power and promote the rights of individuals, who would be property owners and pay a mortgage on their new properties.

    Labor unions were crippling Great Britain with their intimidation and strikes. Prime Minister Thatcher stood firm against the unions, bringing the coal and steel industries under control. Employers and their workforce had achieved the proper balance. It was no longer necessary for men to join the unions.

    Prime Minister Margaret Thatcher believed in putting her “faith in freedom, free markets, limited government and a strong national defense.”

    Early Family life and education of Margaret Roberts

    Margaret Hilda Roberts was born in Grantham, England to Alfred and Beatrice Roberts on Oct. 13, 1925. As a child, Margaret Roberts learned about the business of balancing budgets in her parents’ grocery store in Grantham. Her family lived above the store, and she and her older sister were raised to be truthful, to attend church, to help others and do charitable work in their close community. Margaret’s father, Alfred Roberts, talked daily about Conservative politics in their home. In Grantham he was a councilor in local politics.

    Margaret Thatcher served as Prime Minister for three terms, eleven and a half years. When she became Prime Minister, her country was on the brink of financial disaster and beset by problems of law and order. She said at the time, “Unless we change our ways and our direction, our greatness as a nation will soon be a footnote in the history books, a distant memory of an offshore island, lost in the mist of time like Camelot, remembered kindly for its noble past.”

    When she left office in 1990 her legacy was a sound economy with a society that was confident about its future.

    Leadership Principles: As a leader she believed in working with experts who shared her vision, her plan of action, with a shared goal of repairing Great Britain’s economy. Prime Minister Margaret Thatcher was nicknamed the “Iron Lady” by the Soviets “for the tough line she took against them.” As a leader she had strength, determination, honesty, integrity, and the courage of her convictions, with a passionate belief in the right way to get her country back on track once again.

    Romayne Leader Frank, my mother, began her successful construction and management business from the ground up over 40 years ago on a shoe string budget. She began her Construction and Real Estate Management Company with her husband, my father Robert J Frank, MD.

    Leadership Principles: Romayne Leader Frank began the new company by surrounding herself with experts who knew the building business and real estate markets. She listened to her experts and had a vision, a plan of action for their new company. My mother had a gift for looking at land and seeing its potential. She had the courage of her convictions, honor, integrity, and morality that included a code of ethics and a clear and decisive passionate belief in the right way to conduct her business.

    My mother, Romayne Leader Frank, said “In the building business you want to use a licensed building contractor, who is experienced with references, insured, has workmen’s compensation, and is also bonded for time and money.” This means the job is to be completed for an established amount of money and to be completed by a certain date. In building you want a turnkey job, all inclusive from a knowledgeable and licensed contractor.

    Their legacy: My mother and father, Romayne Leader Frank and Robert J. Frank, M.D., constructed dozens of buildings from Oyster Point to J. Clyde Morris Boulevard in Newport News, Virginia, and in Hampton, James City County, Williamsburg, and the Isle of Wight, Virginia, bringing new businesses to the cities where they constructed new buildings. They also brought many new jobs and more tax revenue into these cities.

    Family life and education of Romayne Leader Frank

    Romayne Leader was born in Detroit, Michigan to Earl David Leader and Mary Chernick Leader on May 28, 1929. Romayne’s father, Earl D. Leader, worked at the Ford Automobile Company assembling automobiles to put himself through law school. Once he had finished law school, he represented many businesses and ran for local political office. Romayne’s mother, Mary Chernick Leader, worked as a secretary at the Burroughs Corporation, ran a women’s stock club, where she taught other women how to study companies and invest in the stock market with little money to develop a nest egg, and helped the Braille Society. Romayne’s parents taught her the value of truth, honesty, hard work, religious faith and service to others. Her parents discussed local politics with her and taught her about business and finance.

    Prime Minister Margaret Thatcher and businesswoman Romayne Leader Frank were both charting new territory in male-dominated areas of government and business. Both went to law school after marrying, at a time when women did not go to law school or work outside the home while raising young children. Both understood the value of a legal education in serving others. They had the courage of their convictions as leaders and never gave up, regardless of the obstacles. With character, honor, integrity, and hard work, they accomplished their goals of serving others. As women leaders they left a legacy, for their countries and for their children and grandchildren, of economic growth and prosperity and of the right and honorable way to conduct business.

    What are the five things you can do to be a world-class leader like Prime Minister Margaret Thatcher and businesswoman Romayne Leader Frank?

    1) Be a leader who has the courage of their convictions, the honor, integrity, and morality that includes a code of ethics and a clear and decisive passionate belief in the right way to conduct business or run your country.

    2) As the leader of your country, organization, or business, surround yourself with a “Brain Trust”: competent people who know your business and markets, are of like mind, and have the strong intellectual capacity to help you accomplish your goals.

    3) Go around the table and listen to your experts carefully. Then make informed decisions.

    4) Once the leader of the country, business, or organization knows all the problems to be faced and has studied them from every angle, the leader needs a vision: a plan of action with the goal to be accomplished. Leaders need a strong belief in service to others.

    5) In the building business, use a building contractor who is experienced with references, licensed, insured, carries workmen’s compensation, and is also bonded for time and money. This means the job is to be completed for an established amount of money and by a certain date. In building you want a turnkey job, all inclusive, from a knowledgeable contractor. Remember, the idea is to keep costs down while getting excellent workmanship from a licensed, insured contractor on a turnkey job with no change orders permitted.

    Bangalore Welcomes You With Unsurpassed Fresher Job Opportunities

    Bangalore, now known as Bengaluru, has a thriving economy represented by the textile, IT, space technology, aviation, biotechnology, and earth-moving industries. If you are a fresher looking for a job, you should explore the industrial base of the city, as it houses some of the leading companies.

    Let’s explore the industrial base where you can find a job in Bangalore:

    1) IT sector: At present, the IT sector of the city is producing over 2 lakh (200,000) jobs in Bangalore. Housing some of the top-notch IT companies, Bangalore enjoys the presence of IT stalwarts. The IT field of the city is concentrated in the two hubs below:

    (A) Electronics City

    (B) Whitefield

    Some of the top IT companies offering jobs in Bangalore are:

    1. Accenture
    2. Intel
    3. Cadence
    4. Deloitte
    5. Capgemini
    6. Sony
    7. Nokia
    8. 3M
    9. Nortel Networks
    10. iGATE
    11. SAP
    12. Dell
    13. Shell
    14. Aviva
    15. Tesco
    16. IBM
    17. Microsoft
    18. Yahoo
    19. Lenovo
    20. Samsung

    2) Biotechnology: Biotechnology has emerged as one of the fastest-growing fields in the city. Headquartered in Bangalore, Biocon is one of India’s most highly reputed biotechnology companies and offers abundant employment opportunities. Furthermore, the Indian Biotechnology Research Organization is taking important steps to bolster biotechnology growth in the country. Advanta India is another biotechnology company set up in Bangalore.

    3) Aerospace & aviation sector: Known as the aviation capital of India, Bangalore enjoys the presence of key aviation companies. Leading names like Boeing, Airbus, Goodrich, Dynamatics, Honeywell, GE Aviation, UTL, and others have set up their research & development and engineering centers in the city. Hindustan Aeronautics Limited (HAL) is also situated in the city; with over 10,000 employees, it is one of the major public-sector employers in Bangalore. HAL currently manufactures various fighter aircraft for Indian defence.

    Headquartered in Bangalore, the National Aerospace Laboratories (NAL) is working towards developing various civil aviation technologies.

    Space technology: In 1972, the Indian government set up the Space Commission and the Department of Space (DOS). Working under the aegis of DOS, the Indian Space Research Organisation (ISRO) has its headquarters in the city. The objective of ISRO is to develop satellites and launch vehicles.

    4) Manufacturing sector: Similarly, heavy industries like Bharat Electronics Limited, Bharat Heavy Electricals Limited (BHEL), Indian Telephone Industries (ITI), Bharat Earth Movers Limited (BEML), HMT (formerly Hindustan Machine Tools), Hindustan Motors (HM), and the ABB Group also produce a good number of jobs in Bangalore. Moreover, the city has become a hub for the automotive sector: Toyota has set up a manufacturing plant in the city, as have Hindustan Motors and Volvo Trucks.

    The salary structure

    Bangalore is considered one of the best-paying Indian cities, and companies here offer good salary packages along with decent career progression. If you start your career as a software engineer, you can expect to earn around Rs 4,37,856 per annum. Similarly, a software developer in the IT sector draws around Rs 3,91,609 per annum, while an SAP consultant earns around Rs 6,10,323 per annum along with bonuses.
