Inert America: Crossroads to the Future


by Gary Griffin


  Systems designed to support education under NCLB must be driven from the top down by policy, but they must be designed and built from the bottom up. Education does not take place in administrative offices at the state or district level; education takes place in the classroom. The teacher teaches and the student learns. For this educational delivery model to work within an information society paradigm, information must drive the process. Teachers therefore need a classroom system that is integrated and cross-functional, so they can get a complete and accurate picture of what’s going on with each student at any given point in time. Such a system must provide a comprehensive, integrated view of all functional areas as they relate to the education process in the classroom.

  Consider the typical hierarchical organizational view of an education enterprise within a state. Within this type of enterprise, systems are typically designed with a top-down approach based on specialization—one system to perform one function, such as a system that supports lesson planning. The problem with this approach is that by the time it reaches the bottom of the hierarchy—the classroom—it forces users to jump from one system to the next just to perform the basic functions necessary to deliver education services. The system is too far removed from the people it’s designed to serve. Moreover, these types of systems generally serve only to create additional islands of data. Continuing to design systems this way under NCLB only propagates the problems identified above.

  The alternative is a system designed and built from the bottom up. Simply stated, this type of system starts where the action is—education in the classroom. It is designed to provide an integrated view across multiple functional areas, giving educators a sophisticated toolset that supports education delivery under an information society paradigm. In short, it is a framework with a broad foundation on which an education enterprise can build. This approach has a number of advantages.

  Tying teachers to the data collection process in the classroom instills quality control processes at the point of collection. In doing so, data collection becomes a by-product of the education process rather than a separate process.

  Systems integration is accomplished by integrating multiple functional areas into one application. This consolidates functions and applications, reducing the number of systems and islands of data.

  Data standardization across systems ceases to be an issue because there is only one system.

  Information is easily shared within the enterprise and across multiple functional areas, putting everyone on the same page about each child’s education.

  The most significant benefit of using the bottom-up approach to designing systems for education under the information society paradigm is cost. System design, development, and implementation involve huge costs that are generally prohibitive for many within the education enterprise. Two economic principles must be followed to drive down those costs and make such systems affordable to all.

  First, the approach embodies Henry Ford’s principle of the assembly line. The education system of the twenty-first century must establish an education assembly line that allows future development to be accomplished within days. It provides a framework that everyone works from. New modules that support new policy changes could be developed almost instantaneously, and the cost of production would drop drastically.

  A second principle is aptly termed the Wal-Mart principle. People shop at Wal-Mart for value and convenience—low prices and one-stop shopping. The systems of twenty-first-century education must provide teachers and administrators with the same value and convenience. All major functions are available in one place—one application. Rather than having to use multiple systems and applications to accomplish education delivery in the classroom, users work from a single application. This is very convenient for users, and it also provides a great deal of value. The system must allow public education to consolidate functions and applications, and this consolidation will free up existing resources in terms of money and people. Personnel who currently have to support multiple applications and multiple functions need to support only one, which frees time for other work in support of the education process. Likewise, funds currently allotted to pay for and support multiple applications can be reallocated to other needs.

  Lowering the cost of ownership for information systems is paramount if the education enterprise is to comply with NCLB. Together, these two principles will enable public education to control costs and make meeting the demands of NCLB affordable.

  Still not convinced? To those who still argue for the top-down approach, the point can be conceded if they agree that the proper organizational view is one in which the hierarchy is inverted and the classroom sits at the top. In fact, this is the proper view of the education enterprise, and it is also consistent with the information society paradigm. In this view, administrative functions become support functions with a service role. As stated previously, the information society is one that is heavily dependent on services, and the education enterprise should reflect this by modeling its organizational view in the same fashion. Either way, the same end is accomplished: the emphasis is put back on delivering education services in the classroom.

  The basic premise behind education is that assessment measures what a child does not know, achievement measures what a child does know, and accountability encompasses the factors that influence both. While it sounds simple, the actual implementation is not. It requires a comprehensive strategy for teachers in the classroom that covers all the major focal points of NCLB. To successfully support education delivery in the classroom, teachers must have a closed-loop system that provides them with constant feedback and is designed to support continuous improvement in the education process.

  The closed-loop system involves the following main elements (a minimal code sketch follows the list):

  High-quality data is fed into the system through the core components of the learning transaction in the classroom, which include standards, assessments, instruction, and curriculum.

  That high-quality data, representing the facts about the people, places, and processes involved in the learning transaction, is then turned into meaningful information about those same people, places, and processes through analysis and reporting to key stakeholders and decision makers.

  Training must be provided to teachers and other decision makers on how to use that information to make better decisions about the education process.

  Better decisions, in turn, produce the leadership, strategy, and planning that help drive the education process.

  Focusing resources based on the information directly influences the quality of the learning transaction and improves education for each student throughout the entire process.

  The results of the education process are fed back into the system as new data through the learning transaction.
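
  To make the loop concrete, here is a minimal sketch in Python. Every name, score, and threshold below is invented for illustration; the code models the stages of the loop described above, not any particular product.

```python
# A minimal sketch of the closed loop; all names and figures are hypothetical.

def assess(student, standards):
    """Stage 1: the learning transaction yields data keyed to standards."""
    return {std: student.get(std, 0.0) for std in standards}

def analyze(records, threshold=0.7):
    """Stage 2: turn facts into information by flagging deficient standards."""
    return {name: [s for s, score in rec.items() if score < threshold]
            for name, rec in records.items()}

def plan(deficiencies):
    """Stages 3-5: decisions focus resources on the flagged areas."""
    return {name: ("focus resources on " + ", ".join(gaps)) if gaps
            else "on track" for name, gaps in deficiencies.items()}

standards = ["reading", "math"]
students = {"A": {"reading": 0.9, "math": 0.6},
            "B": {"reading": 0.5, "math": 0.8}}

records = {name: assess(s, standards) for name, s in students.items()}
print(plan(analyze(records)))
# {'A': 'focus resources on math', 'B': 'focus resources on reading'}
# Stage 6: next period's results re-enter as new records, closing the loop.
```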

  The least well-defined part of the model concerns school improvement and adequate yearly progress as defined within NCLB to measure student achievement. The term school improvement is misused in this sense because the focus of the accountability model is on student performance, not school performance. When certain targets are not met, it is not the schools whose performance has to improve; it is the students’. With the reauthorization of NCLB or some similar legislation, one of the major changes likely to occur is the acceptance of growth-based accountability models. Yet even if the reauthorization of NCLB includes the ability to use growth models to measure student performance, this still will not address the needs of teachers in the classroom. That is to say, teachers do not need growth models—they need growth-optimization models. Teachers need to pinpoint deficient areas in learning against standards so that they can move resources and implement strategies that address those deficient areas for each child.

 
  While the definition of the growth model cannot be absolutely determined at this time, certain things are knowable about growth models that also make it easier to project what a growth-optimization model might look like. The following facts are currently known about growth models:

  They are heavily dependent on data and data quality.

  They measure learning based on standards.

  They are time dependent: growth is measured between two defined points in time, a beginning point and an ending point.

  To create a growth-optimization model, one must build on these facts and also apply advanced predictive analytics, which require historical data that would be available only through a longitudinal data system.
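
  As a hedged illustration of the distinction, the sketch below encodes the three known properties of a growth model and then adds the per-standard pinpointing a growth-optimization model would supply. All scores and targets are invented; a real model would derive its targets from predictive analytics over longitudinal data rather than from fixed constants.

```python
# Hypothetical data: two points in time, measured against standards.
fall    = {"reading": 420, "math": 380}   # beginning point
spring  = {"reading": 450, "math": 385}   # ending point
targets = {"reading": 25, "math": 20}     # expected growth per standard

# A plain growth model: the measured change between the two points.
growth = {std: spring[std] - fall[std] for std in fall}

# The growth-optimization step: pinpoint standards where growth lags the
# target, so resources and strategies can be moved to those areas.
deficient = [std for std, g in growth.items() if g < targets[std]]

print(growth)      # {'reading': 30, 'math': 5}
print(deficient)   # ['math']
```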

  With the increased focus on student performance and growth models under NCLB, it is not surprising to see more education agencies attempting to design systems to support performance management. However, many agencies mistakenly see performance management as an activity that takes place after the fact. Used as intended, performance management should give education agencies a clear picture of where the education enterprise stands at any given point in time in meeting the goals and objectives of the organization, and thereby the requirements of NCLB. That is to say, performance management is about monitoring performance across the enterprise each day; when specific areas are not functioning as they should, resources can be allocated to address the problem that year, before it becomes a statistic reported about last year’s performance. It is not about reporting results from last year—performance management is not a report card. Last year’s results are history and therefore cannot be changed.

  To accomplish performance management across the enterprise, systems must be designed and integrated so that those doing the monitoring can immediately identify problem areas. Once a problem is identified, fully investigating its causes requires longitudinal data covering multiple years of history. In short, a true performance management system must integrate operational data and historical data; together, the two types of data provide the foundation for performance management across the education enterprise. Today’s problems can be fixed today; they don’t have to become last year’s embarrassment.
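
  A minimal sketch of that integration, with invented figures and thresholds: this year’s operational numbers are compared against a multi-year longitudinal baseline so a monitor can flag trouble now, not next year.

```python
# Hypothetical measures: three prior years of history plus this year's data.
history = {"attendance": [0.95, 0.94, 0.95],
           "math_proficiency": [0.72, 0.74, 0.75]}
current = {"attendance": 0.89, "math_proficiency": 0.76}

def flag_problems(current, history, tolerance=0.03):
    """Flag any measure running below its historical baseline this year."""
    flags = []
    for measure, value in current.items():
        baseline = sum(history[measure]) / len(history[measure])
        if value < baseline - tolerance:
            flags.append((measure, value, round(baseline, 3)))
    return flags

print(flag_problems(current, history))
# [('attendance', 0.89, 0.947)] -- attendance is slipping; act this year.
```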

  ENERGY: THE POWER BROKER OF THE TWENTY-FIRST CENTURY

  What America requires in the twenty-first century is a reengineering of the processes that direct its energy and energy resources. That America needs energy, and more of it, is obvious. The reasons may not be quite so obvious. As described previously, power is the rate at which work is performed or energy is converted. Energy is a scalar physical quantity that describes the amount of work that can be performed by a force; it is an attribute of objects and systems that is subject to a conservation law. Any form of energy can be transformed into another form, although there are often limits to the efficiency of converting thermal energy to other forms because of the second law of thermodynamics.167 As an example, when oil reacts with oxygen, potential energy is released. The concept of force is used to describe an influence that causes an object to undergo acceleration. Inertia is the resistance of any physical object to a change in its state of motion, represented numerically by the object’s mass. The principle of inertia is one of the fundamental principles of classical physics, which describe the motion of matter and how it is affected by applied forces. To be inert is to be in a state of doing little or nothing. As I stated at the beginning of this book, America has become inert.168 Why does America need more energy, and how does this apply to the twenty-first century information society and knowledge-based economy?
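
  For readers who want the definitions in numbers, a few invented values show how force, work, and power connect (the figures are illustrative only):

```python
# Illustrative values only: connecting force, work, energy, and power.
m = 1000.0      # kg -- the object's mass, the numerical measure of its inertia
a = 2.0         # m/s^2 -- acceleration produced by an applied force
F = m * a       # Newton's second law: F = m*a -> 2000 N
d = 50.0        # m -- distance over which the force acts
W = F * d       # work performed: W = F*d -> 100,000 J
t = 10.0        # s -- time taken to perform that work
P = W / t       # power, the rate of doing work: P = W/t -> 10,000 W
print(F, W, P)  # 2000.0 100000.0 10000.0
```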

  From its founding until the late 1800s, the United States was largely an agrarian country with abundant forests. During this period, energy consumption overwhelmingly relied on readily available firewood. Rapid industrialization of the economy, urbanization, and the growth of railroads led to increased use of coal, and by 1885, coal had eclipsed wood as the nation’s primary energy source. Coal remained the dominant fuel for the next seven decades; by 1950, it had been surpassed in turn by both petroleum and natural gas. While coal consumption today is the highest it has ever been, it is now mostly used to generate electricity. Natural gas, which burns more cleanly and is more easily transportable, has replaced coal as the preferred source of heating in homes, businesses, and industrial furnaces. Although total energy use increased dramatically during this period, by approximately a factor of fifty between 1850 and 2000, energy use per capita increased only by a factor of four, the difference reflecting population growth of roughly a factor of twelve over the same span.

  At the beginning of the twentieth century, petroleum was a minor resource, used to manufacture lubricants and fuel for kerosene and oil lamps. One hundred years later it had become the preeminent energy source for the United States and the rest of the world. This rise closely paralleled the emergence of the automobile as a major force in American culture and the economy.169 While petroleum is also used as a source for plastics and other chemicals and powers various industrial processes, today two-thirds of oil consumption in the United States is in the form of transportation fuels. Oil’s unique qualities as a transportation fuel in terms of energy content, cost of production, and speed of refueling have made it difficult to supplant with the technological alternatives developed so far.

  The United States is the largest energy consumer in terms of total use, consuming 100 quadrillion BTUs (105 exajoules, or 29 PWh) in 2005. This is three times its consumption in 1950. The United States ranks seventh in energy consumption per capita, after Canada and a number of small countries. The majority of this energy is derived from fossil fuels: in 2005, it was estimated that 40 percent of the nation’s energy came from petroleum, 23 percent from coal, and 23 percent from natural gas. Nuclear power supplied 8.4 percent, and renewable energy supplied 7.3 percent, mainly from hydroelectric dams, with smaller contributions from wind power and geothermal and solar energy. Energy consumption has increased at a faster rate than energy production over the last fifty years in the United States; fifty years ago the two were roughly equal. The difference is now largely met through imports.
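
  A quick arithmetic check confirms that the three figures quoted are the same quantity in different units (using 1 BTU = 1,055.06 joules):

```python
# Unit-conversion check for "100 quadrillion BTUs (105 exajoules, or 29 PWh)".
btu = 100e15                  # 100 quadrillion BTUs
joules = btu * 1055.06        # 1 BTU = 1055.06 J -> about 1.055e20 J
print(joules / 1e18)          # ~105.5 exajoules (1 EJ = 1e18 J)
print(joules / 3.6e18)        # ~29.3 PWh (1 PWh = 3.6e18 J)
```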

  According to the Energy Information Administration’s statistics, per capita energy consumption in the United States has been fairly consistent from the 1970s until today, averaging 335.9 million BTUs per person from 1980 to 2006. One explanation is that the energy required to produce the growing volume of manufactured equipment, cars, and other goods consumed in the United States has been shifted to the countries that produce and transport those goods, along with the corresponding greenhouse gases and pollution. In comparison, the world average increased from 63.7 million BTUs per person in 1980 to 72.4 million in 2006.

  There have been economic and political problems associated with the country’s past dependence on foreign oil supplies, and America’s past consumption of petroleum has caused environmental problems as well. U.S. oil consumption is approximately 21 million barrels per day, yet domestic production is only 6 million barrels per day. The cost to import oil is approximately $630 billion a year (at $115 per barrel).170 While it costs oil companies operating in the Arabian Peninsula just one U.S. dollar to extract a barrel of oil, the price on the world market has ranged up to $100 per barrel in 2007 dollars. While U.S. oil usage increases by 2 percent per year, the economy has been growing at 3.3 percent per year. The Strategic Petroleum Reserve currently holds about 720 million barrels of oil and is near capacity.171
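
  The import figure follows directly from the numbers above: a roughly 15-million-barrel daily gap between consumption and production, priced at $115 per barrel over a year:

```python
# Checking the $630 billion import figure from the quantities above.
consumption = 21e6                       # barrels per day consumed
production  = 6e6                        # barrels per day produced domestically
imports     = consumption - production   # 15 million barrels per day
annual_cost = imports * 115 * 365        # at $115 per barrel
print(annual_cost / 1e9)                 # ~629.6 -> about $630 billion a year
```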

  During the Carter administration, in response to an energy crisis and hostile relations with Iran and the Soviet Union, President Jimmy Carter announced the Carter Doctrine, which declared that any interference with U.S. interests in the Persian Gulf would be considered an attack on U.S. vital interests. The doctrine was later expanded by Ronald Reagan, and this type of foreign policy around energy sources vital to the United States persists today.

  The United States has gotten, and continues to get, most of its electricity from conventional thermal power plants. Most of these burn coal; however, the 1990s and 2000s saw a disproportionate increase in natural gas and other gas-powered plants. From 1992 to 2005, some 270,000 megawatts electric (MWe) of new gas-fired plants were built, but only 14,000 MWe of new nuclear and coal-fired capacity came on line—mostly coal, with 2,315 MWe of that being nuclear. Nuclear and coal plants are considerably more capital intensive than gas plants, and the great shift to gas plant construction is often attributed to deregulation and other political and economic factors. U.S. wind power capacity now exceeds 18,302 MW, which is enough to serve 4.5 million average households. Several solar thermal power stations, including the new 64 MW Nevada Solar One, have also been built. The largest of these is the SEGS group of plants in the Mojave Desert, which has a total generating capacity of 354 MW, making it the largest solar plant of any kind in the world. In 2007, demand for electricity was 783 GW in summer and 640 GW in winter; by 2017, the North American Electric Reliability Corporation (NERC) projects demand of 925 GW in summer and 756 GW in winter.
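
  As a rough sanity check on the wind figure, assuming a capacity factor of about 30 percent (an assumption, not a figure from the text), the installed capacity works out to roughly one typical household’s annual usage per household served:

```python
# Back-of-envelope check on "18,302 MW ... 4.5 million average households".
capacity_mw = 18302
capacity_factor = 0.30                              # assumed typical for wind
annual_mwh = capacity_mw * capacity_factor * 8760   # 8,760 hours in a year
per_household_kwh = annual_mwh * 1000 / 4.5e6
print(round(per_household_kwh))                     # ~10,688 kWh/yr, near a
                                                    # typical U.S. household
```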

  Visible or embedded computers are found everywhere in the United States. In 1999, a study conducted by Mark P. Mills of the Greening Earth Society reported that computers consumed 13 percent of the entire U.S. electricity supply. Numerous researchers questioned Mills’ methodology, and it was later demonstrated that he was off by an order of magnitude; Lawrence Berkeley Labs, for example, concluded that the figure was nearer 3 percent of U.S. electricity use. Although Mills’ study was inaccurate, it helped drive the debate to the national level, and in 2006 the U.S. Senate started a study of the energy consumption of server farms.172 Server farms are typically located with the network switches and/or routers that enable communication between the different parts of the cluster and the users of the cluster. They are increasingly being used instead of, or in addition to, mainframe computers by large enterprises, although server farms do not yet reach the same reliability levels as mainframes. Because of the sheer number of computers in large server farms, the failure of individual machines is a common event, and the management of large server farms needs to take this into account by providing support for redundancy, automatic failover, and rapid reconfiguration of the server cluster. The performance of the very largest server farms (thousands of processors and up) is typically limited by the performance of the data center’s cooling systems and the total cost of electricity rather than by the performance of the processors. A computer that runs around the clock consumes, over its lifetime, electricity worth many times its initial purchase cost. For this reason, the critical design parameter for both large and continuous systems tends to be performance per watt rather than cost of peak performance.
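
  Some hedged, invented numbers illustrate the point about lifetime electricity cost; actual power draw, rates, and facility overhead vary widely:

```python
# Illustrative lifetime electricity cost for one always-on server.
draw_kw = 0.4            # assumed server draw: 400 W around the clock
pue     = 2.0            # assumed facility overhead (cooling, distribution)
rate    = 0.10           # assumed electricity price in $/kWh
years   = 5
lifetime_cost = draw_kw * pue * 24 * 365 * years * rate
print(round(lifetime_cost))   # ~$3,504 -- on the order of (or above) the
                              # hardware price, which is why performance per
                              # watt becomes the critical design parameter
```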

 
