1NC Skills Practice 2

Upload: nikpreet-singh

Posted on 10-Jan-2016


DESCRIPTION

A practice negative constructive.

TRANSCRIPT

1NC Skills Practice 2

1NC

Clinton will win in 2016, but her victory hinges on close swing state elections
Edelman, 2/18 (Adam, "Hillary Clinton would face close 2016 race in 2 swing states: poll," Daily News, http://www.nydailynews.com/news/politics/clinton-face-close-2016-race-2-swing-states-poll-article-1.2120186)
Hillary Clinton could face a tough road to victory in a hypothetical 2016 general election in two key swing states, a new poll shows. A Quinnipiac poll published Wednesday shows that the former secretary of state and presumed Democratic front-runner for the 2016 nomination would barely squeak by her toughest hypothetical Republican challenger in the swing states of Virginia and Colorado. In Virginia, Clinton would tie former Florida Gov. Jeb Bush in a prospective matchup 42% to 42%, and would only narrowly beat out four other prominent potential Republican candidates. Clinton would squeak by Kentucky Sen. Rand Paul 44% to 42% and would edge out former Arkansas Gov. Mike Huckabee 44% to 41%, with both contests being well within the poll's +/- 3 percentage-point margin of error. In the poll, Clinton doesn't fare much better against former New Jersey Gov. Chris Christie, who she would beat 44% to 39%, or Wisconsin Gov. Scott Walker, who she'd defeat, 45% to 40%, either. In Colorado, meanwhile, Clinton would edge out Paul and Walker in a prospective matchup by only 2 percentage points, 43% to 41% and 42% to 40%, respectively, both also within the poll's margin of error. Bush and New Jersey Gov. Chris Christie, however, were both beaten handily by Clinton in the poll in Colorado. Both swing states would be key in any candidate's general election strategy. Obama won both states in both 2008 and 2012 by at least 4 percentage points, helping him to cruise to overwhelming victory in the general elections both years. The survey of more than 1,000 voters in each of the three states was conducted Feb. 5 to 15.

Clinton is irrevocably tied to Obama's popularity; it's on the brink now
Enten, 04/15 (Harry, "For Better Or Worse, Clinton Is Likely Stuck Running For Obama's Third Term," FiveThirtyEight, April 15, http://fivethirtyeight.com/datalab/for-better-or-worse-clinton-is-likely-stuck-running-for-obamas-third-term/)
But all that debate and posturing are largely academic. Clinton is already tied to Obama and is likely to remain so. Remember when Clinton was Obama's secretary of state and her favorable ratings ran well ahead of his? That isn't the case anymore. Check out this chart of YouGov data since the beginning of 2013: [Graph omitted.] Clinton's favorable rating has fallen into line with Obama's. The latest local regression estimate puts Clinton's favorable rating at 48.9 percent and Obama's at 48.5 percent. And that understates the relationship. Even when Clinton's favorable rating was stronger than Obama's, the two generally fell and rose at the same time. The correlation between the two has been 0.79. Now the relationship is even stronger. YouGov breaks down the favorability ratings by age, gender, income, ideology, party identification, race and region. Over the past two weeks, Clinton's favorable rating in each of these groups mirrors Obama's. [Graph omitted.] The average absolute difference between Clinton's and Obama's favorable ratings for the groups is just 2.6 percentage points; the median is just 2 percentage points. Keep in mind, all these differences are within the margin of error. The correlation between Clinton's and Obama's favorable rating in these 23 demographic groups is 0.99.
Clinton simply doesnt seem to have a different base of support from Obama. And thats par for the course historically. Its difficult for a nominee of the same party as the sitting president to run away from that president. The favorability ratings of the nominee and the incumbent president rarely differ by more than a few percentage points. Lets go through all the elections in which a sitting president didnt run for re-election. Well start with 1960, the first year for which we have favorability ratings. On the eve of the 1960 presidential election, Gallup found that President Dwight Eisenhower and Vice President Richard Nixon (the GOPs nominee that year) two very different politicians had nearly equal favorability ratings: 83 percent of Americans gave Eisenhower a positive score on a 10-point scale, compared with 79 percent for Nixon. Eight years later, in 1968, President Lyndon Johnsons approval ratings were so low that he decided not to run for re-election, although he still did fairly well on personal popularity. By the end of the campaign, Vice President Hubert Humphrey, the Democratic nominee, earned a 61 on the thermometer test, a favorability scale from 0 to 100 in the American National Elections Studies. Johnson was nearby with a 58. In 1988, Republican nominee and Vice President George H.W. Bush was trying to take over for President Ronald Reagan. Reagan was fairly popular, and the economy was doing well. Both earned a 60 on the thermometer test. The 2000 election is perhaps the most interesting because Vice President Al Gore, the Democratic nominee, was critiqued for not embracing President Bill Clintons legacy. Clinton, of course, was still dealing with the fallout from certain dalliances. For all of Gores running away, his thermometer rating of 57 was just 1 point above Clintons 56. Republican John McCain in 2008 is the lone exception. His thermometer score was 52. President George W. Bushs was 40. Could Hillary Clinton pull off what McCain did? Its possible. Clinton had a well-formed public profile before Obama came on the scene; shes not running in Obamas shadow. In addition, we dont have a ton of data points here just a handful of elections. So we cant make any hard and fast rules. That said, McCain was an unusual case. He, unlike Clinton and everyone else on this list, did not serve in the incumbent presidents administration. McCain also had a considerably different ideological profile from Bush. Obamas and Clintons are similar. Chances are that the current polling will hold. Clintons fate will more or less be tied to Obamas popularity and the factors that determine it. Any difference will be small, though in a very close election perhaps a small difference is all shell need.Most voters support metadata collection NSA reform would be unpopularAgiesta, 6/1 (Jennifer, Poll: 6 in 10 back renewal of NSA data collection, CNN, 2015, http://www.cnn.com/2015/06/01/politics/poll-nsa-data-collection-cnn-orc/)Washington (CNN) Americans overwhelmingly want to see Congress renew the law enabling the government to collect data on the public's telephone calls in bulk, though they are split on whether allowing that law to expire increases the risk of terrorism in the U.S. With the provisions of the Patriot Act which allow the National Security Administration to collect data on Americans' phone calls newly expired, a new CNN/ORC poll finds 61% of Americans think the law ought to be renewed, including majorities across party lines, while 36% say it should not be reinstated. 
Republican leaders in the Senate are working to pass a bill to reinstate the law, after delays led by Sen. Rand Paul (R-Kentucky), whose presidential campaign has been noted for its appeal to independent voters and younger Republicans, and other surveillance opponents led to the law's expiration at 12:01 a.m. Monday. But Paul's stance on the issue is unlikely to bring him many fans within his own party. Support for renewal peaks among Republicans, 73% of whom back the law. Democrats largely agree, with 63% saying the law should be renewed. Independents are least apt to back it, with 55% saying renew it and 42% let it expire. Liberals, regardless of partisan affiliation, are most likely to say the law should not be renewed, 50% say so while 48% want to see it renewed. About half of Americans, 52%, say that if the law is not renewed, the risk of terrorism here in the U.S. would remain about the same. Still, a sizable 44% minority feel that without the law, the risk of terrorism will rise. Just 3% feel it would decrease. The sense that the risk will rise is greatest among Republicans, 61% of whom say the risk of terrorism will climb if the NSA is unable to collect this data. Among Democrats and independents, less than half feel the risk of terrorism would increase if the program ended. The poll reveals a steep generational divide on the data collection program. Among those under age 35, just 25% say the risk of terrorism would increase without NSA data collection. That more than doubles to 60% among those age 65 or older. Those under age 35 are also split on whether the law should be renewed at all, 50% say it should be renewed while 49% say it should not. Among those age 35 or older, 65% back renewal of the law.Republican in 2016 causes warming and great power warsKlare, Five Colleges Prof of Peace and World Security Studies @ Hampshire College, 2/13/2015 Michael, A Republican Neo-Imperial Vision for 2016 http://www.truthdig.com/report/item/keystone_xl_cold_war_20_and_the_gop_vision_for_2016_20150213This approach has been embraced by other senior Republican figures who see increased North American hydrocarbon output as the ideal response to Russian assertiveness. In other words, the two pillars of a new energy North Americanismenhanced collaboration with the big oil companies across the continent and reinvigorated Cold Warismare now being folded into a single Republican grand strategy. Nothing will prepare the West better to fight Russia or just about any other hostile power on the planet than the conversion of North America into a bastion of fossil fuel abundance. This strange, chilling vision of an American (and global) future was succinctly described by former Secretary of State Condoleezza Rice in a remarkable Washington Post op-ed in March 2014. She essentially called for North America to flood the global energy market, causing a plunge in oil prices and bankrupting the Russians. Putin is playing for the long haul, cleverly exploiting every opening he sees, she wrote, but Moscow is not immune from pressure. Putin and Co. require high oil and gas prices to finance their aggressive activities, and soon, North Americas bounty of oil and gas will swamp Moscows capacity. By authorizing the Keystone XL pipeline and championing natural gas exports, she asserted, Washington would signal that we intend to do exactly that. 
So now you know: approval of the Keystone XL pipeline isnt actually about jobs and the economy; its about battling Vladimir Putin, the Iranian mullahs, and Americas other adversaries. One of the ways we fight back, one of the ways we push back is we take control of our own energy destiny, said Senator Hoeven on January 7th, when introducing legislation to authorize construction of that pipeline. And that, it turns out, is just the beginning of the benefits that North Americanism will supposedly bring. Ultimately, the goals of this strategy are to perpetuate the dominance of fossil fuels in North Americas energy mix and to enlist Canada and Mexico in a U.S.-led drive to ensure the continued dominance of the West in key regions of the world. Stay tuned: youll be hearing a lot more about this ambitious strategy as the Republican presidential hopefuls begin making their campaign rounds. Keep in mind, though, that this is potentially dangerous stuff at every levelfrom the urge to ratchet up a conflict with Russia to the desire to produce and consume ever more North American fossil fuels (not exactly a surprising impulse given the Republicans heavy reliance on campaign contributions from Big Energy). In the coming months, the Obama administration and Hillary Clintons camp will, of course, attempt to counter this drive. Their efforts will, however, be undermined by their sympathy for many of its components. Obama, for instance, has boasted more than once of his success in increasing U.S. oil and gas production, while Clinton has repeatedly called for a more combative foreign policy. Nor has either of them yet come up with a grand strategy as seemingly broad and attractive as Republican North Americanism. If that plan is to be taken on seriously as the dangerous contrivance it is, it evidently will fall to others to do so. This Republican vision, after all, rests on the desire of giant oil companies to eliminate government regulation and bring the energy industries of Canada and Mexico under their corporate sway. Were this to happen, it would sabotage efforts to curb carbon emissions from fossil fuels in a major way, while undermining the sovereignty of Canada and Mexico. In the process, the natural environment would suffer horribly as regulatory constraints against hazardous drilling practices would be eroded in all three countries. Stepped-up drilling, hydrofracking, and tar sands production would also result in the increased diversion of water to energy production, reducing supplies for farming while increasing the risk that leaking drilling fluids will contaminate drinking water and aquifers. No less worrisome, the Republican strategy would result in a far more polarized and dangerous international environment, in which hopes for achieving any kind of peace in Ukraine, Syria, or elsewhere would disappear. The urge to convert North America into a unified garrison state under U.S. (energy) command would undoubtedly prompt similar initiatives abroad, with China moving ever closer to Russia and other blocs forming elsewhere. In addition, those who seek to use energy as a tool of coercion should not be surprised to discover that they are inviting its use by hostile partiesand in such conflicts the U.S. and its allies would not emerge unscathed. In other words, the shining Republican vision of a North American energy fortress will, in reality, prove to be a nightmare of environmental degradation and global conflict. 
Unfortunately, this may not be obvious by election season 2016, so watch out.

1NC

Strong surveillance programs have effectively suppressed terrorism now, but the risk is still high
Lewis 14 (James Andrew Lewis is a senior fellow and director of the Strategic Technologies Program at the Center for Strategic and International Studies in Washington, D.C., where he writes on technology, security, and the international economy. This report is produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. "Underestimating Risk in the Surveillance Debate," December 2014. http://csis.org/files/publication/141209_Lewis_UnderestimatingRisk_Web.pdf | prs)
The phrase terrorism is overused, and the threat of terrorist attack is easily exaggerated, but that does not mean this threat is nonexistent. Groups and individuals still plan to attack American citizens and the citizens of allied countries. The dilemma in assessing risk is that it is discontinuous. There can be long periods where no activity is apparent, only to have the apparent calm explode in an attack. The constant, low-level activity in planning and preparation in Western countries is not apparent to the public, nor is it easy to identify the moment that discontent turns into action. There is general agreement that as terrorists splinter into regional groups, the risk of attack increases. Certainly, the threat to Europe from militants returning from Syria points to increased risk for U.S. allies. The messy U.S. withdrawal from Iraq and (soon) Afghanistan contributes to an increase in risk.24 European authorities have increased surveillance and arrests of suspected militants as the Syrian conflict lures hundreds of Europeans. Spanish counterterrorism police say they have broken up more terrorist cells than in any other European country in the last three years.25 The chairman of the House Select Committee on Intelligence, who is better placed than most members of Congress to assess risk, said in June 2014 that the level of terrorist activity was higher than he had ever seen it.26 If the United States overreacted in response to September 11, it now risks overreacting to the leaks with potentially fatal consequences. A simple assessment of the risk of attack by jihadis would take into account a resurgent Taliban, the power of Islamist groups in North Africa, the continued existence of Shabaab in Somalia, and the appearance of a powerful new force, the Islamic State in Iraq and Syria (ISIS). Al Qaeda, previously the leading threat, has splintered into independent groups that make it a less coordinated force but more difficult target. On the positive side, the United States, working with allies and friends, appears to have contained or eliminated jihadi groups in Southeast Asia. Many of these groups seek to use adherents in Europe and the United States for manpower and funding. A Florida teenager was a suicide bomber in Syria and Al Shabaab has in the past drawn upon the Somali population in the United States. Hamas and Hezbollah have achieved quasi-statehood status, and Hamas has supporters in the United States. Iran, which supports the two groups, has advanced capabilities to launch attacks and routinely attacked U.S. forces in Iraq.
The United Kingdom faces problems from several hundred potential terrorists within its large Pakistani population, and there are potential attackers in other Western European nations, including Germany, Spain, and the Scandinavian countries. France, with its large Muslim population faces the most serious challenge and is experiencing a wave of troubling anti-Semitic attacks that suggest both popular support for extremism and a decline in control by security forces. The chief difference between now and the situation before 9/11 is that all of these countries have put in place much more robust surveillance systems, nationally and in cooperation with others, including the United States, to detect and prevent potential attacks. Another difference is that the failure of U.S. efforts in Iraq and Afghanistan and the opportunities created by the Arab Spring have opened a new front for jihadi groups that makes their primary focus regional. Western targets still remain of interest, but are more likely to face attacks from domestic sympathizers. This could change if the well-resourced ISIS is frustrated in its efforts to establish a new Caliphate and turns its focus to the West. In addition, the al Qaeda affiliate in Yemen (al Qaeda in the Arabian Peninsula) continues to regularly plan attacks against U.S. targets. 27 The incidence of attacks in the United States or Europe is very low, but we do not have good data on the number of planned attacks that did not come to fruition. This includes not just attacks that were detected and stopped, but also attacks where the jihadis were discouraged and did not initiate an operation or press an attack to its conclusion because of operational difficulties. These attacks are the threat that mass surveillance was created to prevent. The needed reduction in public anti-terror measures without increasing the chances of successful attack is contingent upon maintaining the capability provided by communications surveillance to detect, predict, and prevent attacks. Our opponents have not given up; neither should weCurtailing surveillance systems drastically increases the probability of successful terrorist attacksLewis 14 (James Andrew Lewis is a senior fellow and director of the Strategic Technologies Program at the Center for Strategic and International Studies in Washington, D.C., where he writes on technology, security, and the international economy. This report is produced by the Center for Strategic and International Studies (CSIS), a private, taxexempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. Underestimating Risk in the Surveillance Debate, December 2014. http://csis.org/files/publication/141209_Lewis_UnderestimatingRisk_Web.pdf | prs)In one instance, an attack is detected and stopped before it could be executed. U.S. forces operating in Iraq discover a bomb-making factory. Biometric data found in this factory is correlated with data from other bombings to provide partial identification for several individuals who may be bomb-makers, none of whom are present in Iraq. In looking for these individuals, the United States receives information from another intelligence service that one of the bombers might be living in a neighboring Middle Eastern country. Using communications intercepts, the United States determines that the individual is working on a powerful new weapon. 
The United States is able to combine the communications intercept from the known bomb maker with information from other sourcesbattlefield data, information obtained by U.S. agents, collateral information from other nations intelligence servicesand use this to identify others in the bombers network, understand the plans for bombing, and identify the bombers target, a major city in the United States. This effort takes place over months and involves multiple intelligence, law enforcement, and military agencies, with more than a dozen individuals from these agencies collaborating to build up a picture of the bomb-maker and his planned attack. When the bomb-maker leaves the Middle East to carry out his attack, he is prevented from entering the United States. An analogy for how this works would be to take a 1,000-piece jigsaw puzzle, randomly select 200 pieces, and provide them to a team of analysts who, using incomplete data, must guess what the entire picture looks like. The likelihood of their success is determined by how much information they receive, how much time they have, and by experience and luck. Their guess can be tested by using a range of collection programs, including communications surveillance programs like the 215 metadata program. What is left out of this picture (and from most fictional portrayals of intelligence analysis) is the number of false leads the analysts must pursue, the number of dead ends they must walk down, and the tools they use to decide that something is a false lead or dead end. Police officers are familiar with how many leads in an investigation must be eliminated through legwork and query before an accurate picture emerges. Most leads are wrong, and much of the work is a process of elimination that eventually focuses in on the most probable threat. If real intelligence work were a film, it would be mostly boring. Where the metadata program contributes is in eliminating possible leads and suspects. This makes the critique of the 215 program like a critique of airbags in a caryou own a car for years, the airbags never deploy, so therefore they are useless and can be removed. The weakness in this argument is that discarding airbags would increase risk. How much risk would increase and whether other considerations outweigh this increased risk are fundamental problems for assessing surveillance programs. With the Section 215 program, Americans gave up a portion of their privacy in exchange for decreased risk. Eliminating 215 collection is like subtracting a few of the random pieces of the jigsaw puzzle. It decreases the chances that the analysts will be able to deduce what is actually going on and may increase the time it takes to do this. That means there is an increase in the risk of a successful attack. How much of an increase in risk is difficult to determine, but this is crucial for assessing the value of domestic surveillance programs. If the risk of attack is increasing, it is not the right time to change the measures the United States has put in place to deter another 9/11. If risk is decreasing, surveillance programs can be safely reduced or eliminated. A more complicated analysis would ask if the United States went too far after 9/11 and the measures it put in place can be reduced to a reasonable level without increasing risk. 
Unfortunately, precise metrics on risk and effectiveness do not exist, 12 and we are left with the conflicting opinions of intelligence officials and civil libertarians as to what makes effective intelligence or counterterrorism programs. There are biases on both sides, with intelligence officials usually preferring more information to less and civil libertarians can be prone to wishful thinking about terrorism and opponent intentions.13 Interviews with current and former intelligence officials give us some guidance in deciding this. The consensus among these individuals is that 215 is useful in preventing attacks, but the least useful of the programs available to the intelligence community. If there was one surveillance program they had to give up, it would be 215 before any others, but ending 215 would not come without some increase in risk. Foreign Surveillance and Espionage Domestic collection raises difficult constitutional issues. Foreign collection raises diplomatic issues, since espionage is a normal practice among states. NSA carried out two kinds of signals intelligence programs: mass surveillance to support the counterterrorism efforts of allies, particularly in Europe (with the participation of European governments, even if their publics were unaware of this cooperation); and targeted collection to support national security interests to the United States.14 This foreign intelligence activity was conducted under Section 702 of FISA and amendments, which provides NSA the authority to collect foreign communications. Mass surveillance serves counterterrorism purposes. It provides a means of identifying terrorists and their networks for targeted collection. The mass collection programs conducted by NSA were done in cooperative arrangements with European security services and those of other nations; that this was largely unknown to European populations and legislatures is more a reflection of the weaknesses of the oversight of their own intelligence activities by Europeans.15 The United States had unique advantages for counterterrorism on a global level. These include its relationships with a range of foreign intelligence partners, its technical resources, and the information it obtained from its operations in Iraq and Afghanistan. A captured cell phone can provide names and numbers that allow for more precise targeting. Using sophisticated software and combined with information from other intelligence services, NSA was able to take the billions of calls and emails sent each day and winnow them down to a few thousand messages to which an intelligence analyst actually listened. The nature of mass surveillance has been misrepresented, although this point is politically irrelevant. The chief error lies in the difference between collecting and exploitation. Millions of records were collected, very few were exploited. Sophisticated software programs sort through the records to find conversations linked to terrorism, or in some cases, to proliferation or espionage. Most conversations are boring. NSA devoted almost two decades to develop technologies that would let it sort automatically through millions of records to find those few conversations or messages of interest. The actual number of messages is staggering. One report estimates that there were more than 100 trillion messages on the Internet in 2010more than 30 billion every day (if we discount spam). 
There are more than 3 billion email accounts globally.16 There are more than 6 billion mobile phone users.17 The idea that NSA and its partner services in Europe and Asia listened to each message or phone is ludicrous, even if firmly embedded in the public consciousness. We do not know the full scope of success for these programs. The traditional explanation is that the failures of intelligence services will be on every front page but their successes will never be known. We do know that ending NSAs role in mass surveillance will not end mass surveillance in Europe or other parts of the world. What it will do is reduce cooperation among allied and friendly services and the United States and restrict access to extraregional counterterrorism data. Cooperation will not end, but will be less effective at a time when jihadi fighters are beginning to return from Syria. There will be an increase in the risk that a plot will go undetected, but how much risk will increase is difficult to estimate. While the mass surveillance programs supported counterterrorism and were done in cooperation with European services, other NSA intelligence collection programs were focused on nonproliferation, counterintelligence (including Russian covert influence efforts), and arms sales to China. Unfortunately, Europes record on these matters is open to question. Espionage does not occur in a vacuum but in a larger political and military context. U.S. leaders may have failed to exercise sufficient oversight over intelligence collection, but the objectives set for NSA reflect real problems. One central issue is whether the notion that friends dont spy on friends makes any sense. Germany might actually follow the rule that friends do not spy on friends, but that would make it practically unique in the world. A recent internal review by the United States ranked France, for example, as the fourth-most-active nation engaged in economic espionage against it (the others were, in order, China, Russia, and another ally). In the mid-1990s, the director of central intelligence traveled to Paris to warn that French espionage was reaching intolerable levels.18 There are other less public anecdotes that go back decades and make it clear that friends do indeed spy on friends. This does not mean that France (or the other ally) was not a friend. There is strong and essential cooperation with France on key security issues. It is a valued partner for the United States. What it means is that relations among powerful states are complex and not explained by simple-minded bromides drawn from personal life. Espionage is not the best way to engage with Germany on the ambivalent attitude of many of its citizens toward the United States or nostalgie de la boue [nostalgia] when it comes to Soviet Russia,19 but it is difficult to think of another former leader of a major power who is on Vladimir Putins payroll. German military sales to China could justify U.S. espionage. Germany is not alone among European nations in making such sales, but sales such as the transfer of quiet diesel submarine engines from a German firm to China is troubling. France and other European nations have also contributed advanced technology to Chinese military programs. Europe reluctantly maintains an embargo on arms sales to China because of the Tiananmen massacre; but the way around the embargo is to reclassify items as commercial, not military. Hence these are commercial submarine engines. 
The most startling thing about espionage against Germany is the complete absence of any strategic calculation by the United States about the risks and benefits. Germanys status and position in Europe and the world has changed. Germany is no longer a junior partner. The United States needs a closer partnership with Germany to achieve its foreign policy goals. Building this partnership will not be easyGerman foreign policy is inadequate in ways that Germans may not appreciatewhich is not surprising, since the Wilhelmine, Weimar, and PostWeimar precedents are unfortunate, and German policy could focus on commercial issues when Germany was part of a larger alliance led by the United States. Sending troops to support NATO in Afghanistan but tightly limiting their ability to fight, for example, suggests Germany has the means and the form of power, but has not yet defined how to use it. Both the United States and Germany would find it easier to achieve their foreign policy goals if they work together, but this will require them to redefine their partnership. Germanys traditional policy of letting commercial interests trump larger political and security concerns was tolerable in a junior partner; it is no longer an adequate basis for partnership. Similarly, the United States must reevaluate the risks of espionage against Germany. There are alternatives to espionage to gain information on Germanys relations with Russia, Iran, and China. If Germany rebuffed serious discussion or continued to place commercial gain above the security of an ally, espionage would be a justifiable. Public dialoguepublic because the audience must include the Germans public and not just traditional foreign policy elitescould help shape a new partnership with Germany appropriate for its status, but the United States forfeited a chance to test and perhaps build its relationship with Germany. The benefits of espionage are outweighed by the risk to the bilateral relationship and to U.S. influence in the world, but there is no evidence that this calculation of gain, loss, and risk was ever considered. The case for spying on Brazil is nonexistent. While Brazil is often antagonistic, it does not pose a risk to national security. Spying on Brazil reflects a failure of oversight that explains many of the excesses revealed by Snowden. If you ask an intelligence agency to get information, they will get it by spying. Intelligence agencies are given collection priorities by the White House that shapes their strategies and expenditures. Presumably someone in the White House or in a Cabinet agency, interested in Brazil, put it on a list of collection priorities for the intelligence community. Or they may have wanted to know if Brazils president intended to continue her predecessors policy of rapprochement with Iran. Neither subject justifies espionage against Brazil since such information was obtainable in other ways. In most cases involving economic intelligence, the private sector, which has powerful incentives to obtain economic information and verify its accuracy, is a better source than espionage and cheaper as well. If you ask an intelligence agency a question, it uses espionage to get the answer even if a better answer is available from other sources. Espionage can confirm publicly available data or commitments,20 but provides little added value. 
That said, intelligence agencies are loath to give up the economic portfolio for bureaucratic reasons and their policymaker clients often lack the connections to the financial community that could provide alternative sources of information. U.S. espionage was not driven by commercial motives. This differentiates it from the espionage activities of most of the countries that spy on the United States. The logic of Snowdens leaks is not one sided. If they have revealed a range of activities, what they have not revealed is equally important. Snowden has not produced any evidence of commercial espionage by the United States. Nor has be produced any evidence of the NSA programs being used for political advantage, to suppress debate or punish opponents. Enquiries by the European Union in relation to charges about the alleged Echelon signals intelligence system found that the United States did not illicitly acquire business information or technology.21 The United States does collect information on foreign companies in two specific areas: bribery and nonproliferation. This information is not shared with U.S. companies but with the involved governments as part of a diplomatic exchange to prevent such activities. There is some discussion of whether the United States needs to change its policy on illicitly acquiring technology or business information, but there is currently no serious effort to change existing law to allow this. Snowdens revelations provided an opportunity to give expression to a not-so-latent anti-Americanism in Europe and Latin America. Contrary to the assertions of Brazils president at the UN General Assembly, U.S. actions were not a violation of the customary practice among states, which routinely spy on each other. Brazil itself engages in espionage, along with widespread surveillance of its own population with almost no oversight. Nor were U.S. actions illegal under international law. There is no international treaty on espionage, nor is it prohibited. International law studiously ignores espionage that, by definition, is an agent of one country, operating legally under that countrys laws, violating the laws of another country. Countries may dislike being spied upon, but it is not illegal. The missing aspect of the Snowden discussion is that mass surveillance programs conducted by NSA in Europe were done in cooperation with European security services. Because European nations lack adequate oversight mechanisms for intelligence, the signal greatest inaccuracy in the Snowden debate is the failure to recognize that mass surveillance programs were done in cooperation with European partners to prevent terrorist incidents, part of a larger program of cooperation on counterterrorism. Information on these cooperative relationships is not easily found in public sources and it is not in the interests of those handling the Snowden material to highlight this cooperative aspect. European governments cooperated with NSA because it provided global capabilities that significantly improved their counterterrorism capabilities. It is perfectly understandable that no European government was willing to stand up and say thisthere is a strong tradition of keeping intelligence activities secret from their own publics in Europe, and there would have been considerable political risk to any elected official. Still, a blend of hypocrisy, anti-Americanism, and ignorance is not a strong foundation for the transatlantic relationship. 
NSA mass surveillance was done with the participation of European governments, and the United States need not apologize for the failure of European oversight. The United States underestimated risk before 9/11 and relied on a cumbersome, legalistic, and fragmented defense. Creating new hurdles or restoring old ones increases the chance of a successful attack. If the probability of a terrorist attack is declining (or if the United States overestimates the probability of attack), curtailing some collection programs will not increase risk. If risk remains the same or is increasing, curtailing an effective intelligence program will increase risk. 9/11 was unprecedented in its audacity and scope, but circumstances that allowed it to occur have changed in ways that make a repetition unlikely. Conventional attacks using firearms and explosives are attempted with such frequency, however, that we cannot say that risk has returned to pre-9/11 levels

Nuclear terrorism causes extinction
Hellman 8 (Martin E. Hellman, emeritus prof of engineering @ Stanford, "Risk Analysis of Nuclear Deterrence," SPRING 2008, THE BENT OF TAU BETA PI, http://www.nuclearrisk.org/paper.pdf)
The threat of nuclear terrorism looms much larger in the public's mind than the threat of a full-scale nuclear war, yet this article focuses primarily on the latter. An explanation is therefore in order before proceeding. A terrorist attack involving a nuclear weapon would be a catastrophe of immense proportions: "A 10-kiloton bomb detonated at Grand Central Station on a typical work day would likely kill some half a million people, and inflict over a trillion dollars in direct economic damage. America and its way of life would be changed forever." [Bunn 2003, pages viii-ix]. The likelihood of such an attack is also significant. Former Secretary of Defense William Perry has estimated the chance of a nuclear terrorist incident within the next decade to be roughly 50 percent [Bunn 2007, page 15]. David Albright, a former weapons inspector in Iraq, estimates those odds at less than one percent, but notes, "We would never accept a situation where the chance of a major nuclear accident like Chernobyl would be anywhere near 1% .... A nuclear terrorism attack is a low-probability event, but we can't live in a world where it's anything but extremely low-probability." [Hegland 2005]. In a survey of 85 national security experts, Senator Richard Lugar found a median estimate of 20 percent for the probability of an attack involving a nuclear explosion occurring somewhere in the world in the next 10 years, with 79 percent of the respondents believing it more likely to be carried out by terrorists than by a government [Lugar 2005, pp. 14-15]. I support increased efforts to reduce the threat of nuclear terrorism, but that is not inconsistent with the approach of this article. Because terrorism is one of the potential trigger mechanisms for a full-scale nuclear war, the risk analyses proposed herein will include estimating the risk of nuclear terrorism as one component of the overall risk. If that risk, the overall risk, or both are found to be unacceptable, then the proposed remedies would be directed to reduce whichever risk(s) warrant attention. Similar remarks apply to a number of other threats (e.g., nuclear war between the U.S. and China over Taiwan). This article would be incomplete if it only dealt with the threat of nuclear terrorism and neglected the threat of full-scale nuclear war. If both risks are unacceptable, an effort to reduce only the terrorist component would leave humanity in great peril. In fact, society's almost total neglect of the threat of full-scale nuclear war makes studying that risk all the more important. The Cost of World War III: The danger associated with nuclear deterrence depends on both the cost of a failure and the failure rate.3 This section explores the cost of a failure of nuclear deterrence, and the next section is concerned with the failure rate. While other definitions are possible, this article defines a failure of deterrence to mean a full-scale exchange of all nuclear weapons available to the U.S. and Russia, an event that will be termed World War III. Approximately 20 million people died as a result of the first World War. World War II's fatalities were double or triple that number; chaos prevented a more precise determination. In both cases humanity recovered, and the world today bears few scars that attest to the horror of those two wars.
Many people therefore implicitly believe that a third World War would be horrible but survivable, an extrapola- tion of the effects of the first two global wars. In that view, World War III, while horrible, is something that humanity may just have to face and from which it will then have to recover. In contrast, some of those most qualified to assess the situation hold a very different view. In a 1961 speech to a joint session of the Philippine Con- gress, General Douglas MacArthur, stated, Global war has become a Frankenstein to destroy both sides. If you lose, you are annihilated. If you win, you stand only to lose. No longer does it possess even the chance of the winner of a duel. It contains now only the germs of double suicide. Former Secretary of Defense Robert McNamara ex- pressed a similar view: If deterrence fails and conflict develops, the present U.S. and NATO strategy carries with it a high risk that Western civilization will be destroyed [McNamara 1986, page 6]. More recently, George Shultz, William Perry, Henry Kissinger, and Sam Nunn4 echoed those concerns when they quoted President Reagans belief that nuclear weapons were totally irrational, totally inhu- mane, good for nothing but killing, possibly destructive of life on earth and civilization. [Shultz 2007] Official studies, while couched in less emotional terms, still convey the horrendous toll that World War III would exact: The resulting deaths would be far beyond any precedent. Executive branch calculations show a range of U.S. deaths from 35 to 77 percent (i.e., 79-160 million dead) a change in targeting could kill somewhere between 20 million and 30 million additional people on each side .... These calculations reflect only deaths during the first 30 days. Additional millions would be injured, and many would eventually die from lack of adequate medical care millions of people might starve or freeze during the follow- ing winter, but it is not possible to estimate how many. further millions might eventually die of latent radiation effects. [OTA 1979, page 8] This OTA report also noted the possibility of serious ecological damage [OTA 1979, page 9], a concern that as- sumed a new potentiality when the TTAPS report [TTAPS 1983] proposed that the ash and dust from so many nearly simultaneous nuclear explosions and their resultant fire- storms could usher in a nuclear winter that might erase homo sapiens from the face of the earth, much as many scientists now believe the K-T Extinction that wiped out the dinosaurs resulted from an impact winter caused by ash and dust from a large asteroid or comet striking Earth. The TTAPS report produced a heated debate, and there is still no scientific consensus on whether a nuclear winter would follow a full-scale nuclear war. Recent work [Robock 2007, Toon 2007] suggests that even a limited nuclear exchange or one between newer nuclear-weapon states, such as India and Pakistan, could have devastating long-lasting climatic consequences due to the large volumes of smoke that would be generated by fires in modern megacities. While it is uncertain how destructive World War III would be, prudence dictates that we apply the same engi- neering conservatism that saved the Golden Gate Bridge from collapsing on its 50th anniversary and assume that preventing World War III is a necessitynot an option. 
Case

Cyber Security

The first paragraph of the Fritz evidence explains the multiple tiers of security and monitoring that watch over the nuclear system: multiple people must first decide that the order is real, making a fake order unlikely, meaning there is a low risk of the impact occurring the way they say. Also, the Fritz evidence claims "The choice of operating system, apparently based on Windows XP, is not as alarming as the advertising of such a system is," meaning the Windows operating systems on submarines are irrelevant.

No cyberterror empirics, hollow threats, no capabilities, no motive, disincentives, and focus on other strategies
Healey 11, Director of the Cyber Statecraft Initiative at the Atlantic Council of the United States (Jason, "Cyberterror is Aspirational Blather," http://www.acus.org/new_atlanticist/cyberterror-aspirational-blather, 10/3/11)
The Atlantic Council's Cyber Statecraft Initiative recently hosted a conference call to discuss the terrorist use of the Internet and how it has evolved in the ten years since 9/11. The call featured Matt Devost of FusionX, Neal Pollard of PriceWaterhouseCooper and Rick Howard of VeriSign/iDefense who have been tracking this and many other online threats for years. While this conversation was off the record, this blog attempts to capture the spirit. Terms such as "cyber 9/11" and "cyber terrorism" have been used frequently to describe the security threats posed by terrorists online. Cyber technologies, like any other, enable terrorist groups to do their terrorizing more effectively and efficiently. In the past few years it is increasingly common for them to use the Internet for propaganda, fundraising, general support, and convergence. Videos and anonymous discussion forums allow for the dissemination of training information and the call to arms for more individuals to participate and join groups. Importantly, the panelists agreed that these groups have not yet used cyber attack capabilities in any significant way to cause casualties or actually terrorize anyone. While Ibrahim Samudra or Irahabi 007 hacked to raise funds through credit-card fraud, this is a traditional support activity, not cyber terror. The US government was a relatively early advocate of a strict definition of cyber terrorism, as nearly a decade ago they were calling it "a criminal act perpetrated through computers resulting in violence, death and/or destruction, and creating terror for the purpose of coercing a government to change its policies." Not defacing a webpage, not flooding a website (even of the South Korean president) and not stealing credit card information. Some terrorist groups may talk about waging an "e-Jihad," but such talk remains, for now, aspirational blather. For decades, the rule of thumb for intelligence analysts has been that adversaries with motives for damaging cyber attacks do not have the capabilities, while those with the capabilities do not yet have the motives. A large-scale cyber attack is more difficult than is generally believed and few adversaries have both the motive and capability. Additionally, terrorist groups have many disincentives for pursuing cyber capabilities. For example, their leadership tends to be conservative and they tend to stick with what they know will work: suicide bombers, road-side bombs, and kinetic assaults. These actually kill and terrorize people which, as yet, no cyber attack has accomplished.
The Congressional Research Service summed this up as lower risk, but less drama.Plan is not sufficient to be able to solve the degradation, their Cate card specifies SIGNIT and other causes within the NSA that cause the vulnerabilities, because the NSA still exists, alternate causes will still make cyber security vulnerable regardless.Executive Orders lack the exposure and durability to solve.Schlanger 15 Margo, Henry M. Butzel Professor of Law, University of Michigan. Intelligence Legalism and the National Security Agency's Civil Liberties Gap, Harvard National Security Journal. 6:112Efforts to implement most of the Church Committee's substantive recommendations as statutory law failed; they entered American law instead as part ofExecutive Order12,333.As already quoted, the Executive Order does expressly state (in language unchanged from its 1981 promulgation): "Set forth below are certain general principles that, in addition to and consistent with applicable laws, are intended to achieve the proper balance between the acquisition of essential information and protection of individual interests."n301That is, one of 12,333's purposes is to fill the civil liberties gap left by constitutional and statutory law.But 12,333 cannot live up to that goal. For one thing, the rules' status as part of an executive order renders them both less visible and more easily weakened. The 2008 amendments to 12,333, for example, for the first time allowed inter-agency sharing ofsignals intelligence"for purposes of allowing the recipient agency to determine whether the information is[*181]relevant to its responsibilities and can be retained by it," pursuant to potential "procedures established by the Director in coordination with the Secretary of Defense and approved by the Attorney General."n302This change received no attention by non-governmental commentators.n303More important, even ifExecutive Order12,333adequately covered civil liberties interests in 1980, it--along with its associated AG Guidelines--has grown out-of-date in subsequent decades. Unsurprisingly, given the generally low visibility of intelligence matters, there was little appetite to update eitherExecutive Order12,333or other sources of executive self-regulation to address new challenges to liberty, until the Snowden disclosures. Thus notwithstanding the enormous changes that have taken place in the scope of surveillance since 1980 and the advent of "big data" methods, there have been no substantive liberty-protective changes ever made to the Executive Order. Some procedural protections have been added,n304and notable efforts toweakenthe protection of U.S. Person information were fended off.n305But whatever further substantive protection might be useful in light of technological or other changes, all that has been added since 1980 is new hortatory language swearing fealty to (already binding) other laws: "The United States Government has a solemn obligation, and shall continue in the conduct of intelligence activities under this order, to protect fully the legal rights of all United States persons, including freedoms, civil liberties, and privacy rights guaranteed by Federal law."n306Other groups that use the exploits produced by the NSA still develop their own exploits, the NSA is not a magical exploit generator. 
This means even in the world of the plan their impacts can still occur.The risk of a cyber attack is EXTREMELY over-hyped: by the government to get bills passed, by companies to get funding, and by the mediaThomas Rid, a Reader in War Studies at Kings college Longon and a non-resident fellow at the Center for Transatlantic Relations in the School for Advanced International Studies Johns Hopkins University and worked at the Woodrow Wilson Center, 3/13/13, The Great Cyberscare http://www.foreignpolicy.com/articles/2013/03/13/the_great_cyberscareThe White House likes a bit of threat. In his State of the Union address, Barack Obama wanted to nudge Congress yet again into passing meaningful legislation. The president emphasized that America's enemies are "seeking the ability to sabotage our power grid, our financial institutions, and our air traffic control systems." After two failed attempts to pass a cybersecurity act in the past two years, he added swiftly: "We cannot look back years from now and wonder why we did nothing in the face of real threats to our security and our economy." Fair enough. A bit of threat to prompt needed action is one thing. Fear-mongering is something else: counterproductive. Yet too many a participant in the cybersecurity debate reckons that puffery pays off. The Pentagon, no doubt, is the master of razzmatazz. Leon Panetta set the tone by warning again and again of an impending "cyber Pearl Harbor." Just before he left the Pentagon, the Defense Science Board delivered a remarkable report, Resilient Military Systems and the Advanced Cyber Threat. The paper seemed obsessed with making yet more drastic historical comparisons: "The cyber threat is serious," the task force wrote, "with potential consequences similar to the nuclear threat of the Cold War." The manifestations of an all-out nuclear war would be different from cyberattack, the Pentagon scientists helpfully acknowledged. But then they added, gravely, that "in the end, the existential impact on the United States is the same." A reminder is in order: The world has yet to witness a single casualty, let alone fatality, as a result of a computer attack. Such statements are a plain insult to survivors of Hiroshima. Some sections of the Pentagon document offer such eye-wateringly shoddy analysis that they would not have passed as an MA dissertation in a self-respecting political science department. But in the current debate it seemed to make sense. After all a bit of fear helps to claim -- or keep -- scarce resources when austerity and cutting seems out-of-control. The report recommended allocating the stout sum of $2.5 billion for its top two priorities alone, protecting nuclear weapons against cyberattacks and determining the mix of weapons necessary to punish all-out cyber-aggressors. Then there are private computer security companies. Such firms, naturally, are keen to pocket some of the government's money earmarked for cybersecurity. And hype is the means to that end. Mandiant's much-noted report linking a coordinated and coherent campaign of espionage attacks dubbed Advanced Persistent Threat 1, or "APT1," to a unit of the Chinese military is a case in point: The firm offered far more details on attributing attacks to the Chinese than the intelligence community has ever done, and the company should be commended for making the report public. 
But instead of using cocky and over-confident language, Mandiant's analysts should have used Words of Estimative Probability, as professional intelligence analysts would have done. An example is the report's conclusion, which describes APT1's work: "Although they control systems in dozens of countries, their attacks originate from four large networks in Shanghai -- two of which are allocated directly to the Pudong New Area," the report found. Unit 61398 of the People's Liberation Army is also in Pudong. Therefore, Mandiant's computer security specialists concluded, the two were identical: "Given the mission, resourcing, and location of PLA Unit 61398, we conclude that PLA Unit 61398 is APT1." But the report conspicuously does not mention that Pudong is not a small neighborhood ("right outside of Unit 61398's gates") but in fact a vast city landscape twice the size of Chicago. Mandiant's report was useful and many attacks indeed originate in China. But the company should have been more careful in its overall assessment of the available evidence, as the computer security expert Jeffrey Carr and others have pointed out. The firm made it too easy for Beijing to dismiss the report. My class in cybersecurity at King's College London started poking holes into the report after 15 minutes of red-teaming it -- the New York Times didn't. Which leads to the next point: The media want to sell copy through threat inflation. "In Cyberspace, New Cold War," the headline writers at the Times intoned in late February. "The U.S. is not ready for a cyberwar," shrieked the Washington Post earlier this week. Instead of calling out the above-mentioned Pentagon report, the paper actually published two supportive articles on it and pointed out that a major offensive cyber capability now seemed essential "in a world awash in cyber-espionage, theft and disruption." The Post should have reminded its readers that the only military-style cyberattack that has actually created physical damage -- Stuxnet -- was actually executed by the United States government. The Times, likewise, should have asked tough questions and pointed to some of the evidential problems in the Mandiant report; instead, it published what appeared like an elegant press release for the firm. On issues of cybersecurity, the nation's fiercest watchdogs too often look like hand-tame puppies eager to lap up stories from private firms as well as anonymous sources in the security establishment.

And, the plan can't solve for cybersecurity because its solvency only applies to corporations in America. The USFG forcing Microsoft to make a patch only in America, even though it is global software, further makes other countries feel like they shouldn't trust America because America is playing favorites. This means that the 1AC cannot solve for cybersecurity.

Internet Freedom

No impact to internet nationalization
Gordon M. Goldstein, 6/25/2014. Served as a member of the American delegation to the World Conference on International Telecommunications. "The End of the Internet?" The Atlantic, http://m.theatlantic.com/magazine/archive/2014/07/the-end-of-the-internet/372301/.
Some experts anticipate a future with a Brazilian Internet, a European Internet, an Iranian Internet, an Egyptian Internet, all with different content regulations and trade rules, and perhaps with contrasting standards and operational protocols. Eli Noam, a professor of economics and finance at Columbia Business School, believes that such a progressive fracturing of the global Internet is inevitable.
"We must get used to the idea that the standardised internet is the past but not the future," he wrote last fall. "And that the future is a federated internet, not a uniform one." Noam thinks that can be managed, in part through the development of new intermediary technologies that would essentially allow the different Internets to talk to each other, and allow users to navigate the different legal and regulatory environments.

Internet isn't key to innovation

Eli M. Noam, 8/28/2002. Professor of Finance and Economics, Director, Columbia Institute for Tele-Information, Graduate School of Business, Columbia University. Will the Internet Be Bad for Democracy? Financial Times Online, http://www.citi.columbia.edu/elinoam/articles/int_bad_dem.htm.

When the media history of the 20th Century will be written, the Internet will be seen as its major contribution. Television, telephone, and computers will be viewed as its early precursors, merging and converging into the new medium just as radio and film did into TV. The Internet's impact on culture, business, and politics will be vast, for sure. Where will it take us? To answer that question is difficult, because the Internet is not simply a set of interconnecting links and protocols connecting packet switched networks, but it is also a construct of imagination, an inkblot test into which everybody projects their desires, fears and fantasies. Some see enlightenment and education. Others see pornography and gambling. Some see sharing and collaboration; others see e-commerce and profits. Controversies abound on most aspects of the Internet. Yet when it comes to its impact on the democratic process, the answer seems unanimous.[1] The Internet is good for democracy. It creates digital citizens (Wired 1997) active in the vibrant teledemocracy (Etzioni, 1997) of the Electronic Republic (Grossman 1995) in the Digital Nation (Katz 1992). Is there no other side to this question? Is the answer so positively positive? The reasons why the Internet is supposed to strengthen democracy include the following. 1. The Internet lowers the entry barriers to political participation. 2. It strengthens political dialogue. 3. It creates community. 4. It cannot be controlled by government. 5. It increases voting participation. 6. It permits closer communication with officials. 7. It spreads democracy world-wide. Each of the propositions in this view, which might be called utopian populism, is questionable. But they are firmly held by the Internet founder generation, by the industry that now operates the medium, by academics from Negroponte (1995) to Dahl (1989), by gushy news media, and by a cross-party set of politicians who wish to claim the future, from Gore to Gingrich, from Bangemann to Blair. I will argue, in contrast, that the Internet, far from helping democracy, is a threat to it. And I am taking this view as an enthusiast, not a critic. But precisely because the Internet is powerful and revolutionary, it also affects, and even destroys, all traditional institutions -- including democracy. To deny this potential is to invite a backlash when the ignored problems eventually emerge.[2] My perspective is different from the neo-Marxist arguments about big business controlling everything; from neo-Luddite views that low-tech is beautiful; and from reformist fears that a politically disenfranchised digital underclass will emerge. The latter, in particular, has been a frequent perspective. Yet, the good news is that the present income-based gap in Internet usage will decline in developed societies. Processing and transmission becomes cheap, and will be anywhere, affordably. Transmission will be cheap, and connect us to anywhere, affordably. And basic equipment will almost be given away in return for long-term contracts and advertising exposure. That is why what we now call basic Internet connectivity will not be a problem.
Internet connectivity will be near 100% of the households and offices, like electricity, because the Internet will have been liberated from the terror of the PC as its gateway, the most consumer-unfriendly consumer product ever built since the unicycle. Already, more than half of communications traffic is data rather than voice, which means that it involves fast machines rather than slow people. These machines will be everywhere. Cars will be chatting with highways. Suitcases will complain to airlines. Electronic books will download from publishers. Front doors will check in with police departments. Pacemakers will talk to hospitals. Television sets will connect to video servers. For that reason, my skepticism about the Internet as a pro-democracy force is not based on its uneven distribution. It is more systemic. The problem is that most analysts commit a so-called error of composition. That is, they confuse micro behavior with macro results. They think that if something is helpful to an individual, it is also helpful to society at large, when everybody uses it. Suppose we would have asked, a century ago, whether the automobile would reduce pollution. The answer would have been easy and positive: no horses, no waste on the roads, no smell, no use of agricultural land to grow oats. But we now recognize that in the aggregate, mass motorization has been bad for the environment. It created emissions, dispersed the population, and put more demand on land. The second error is that of inference. Just because the Internet is good for democracy in places like North Korea, Iran, or Sudan does not mean that it is better for Germany, Denmark, or the United States. Just because three TV channels offer more diversity of information than one does not mean that 30,000 are better than 300. So here are several reasons why the Internet will not be good for democracy, corresponding to the pro-democracy arguments described above. The Internet Will Make Politics More Expensive and Raise Entry Barriers. The hope has been that online public space will be an electronic version of a New England or Swiss town meeting, open and ongoing. The Internet would permit easy and cheap political participation and political campaigns. But is that true? Easy entry exists indeed for an Internet based on narrowband transmission, which is largely text-based. But the emerging broadband Internet will permit fancy video and multimedia messages and information resources. Inevitably, audience expectations will rise. When everyone can speak, who will be listened to? If the history of mass media means anything, it will not be everyone. It cannot be everyone. Nor will the wisest or those with the most compelling case or cause be heard, but the best produced, the slickest, and the best promoted. And that is expensive. Secondly, because of the increasing glut and clutter of information, those with messages will have to devise strategies to draw attention. Political attention, just like commercial attention, will have to be created. Ideology, self-interest, and public spirit are some factors. But in many cases, attention needs to be bought, by providing entertainment, gifts, games, lotteries, coupons, etc. That, too, is expensive. The basic cost of information is rarely the problem in politics; it's the packaging. It is not difficult or expensive to produce and distribute handbills or to make phone calls, or to speak at public events.
But it is costly to communicate to vast audiences in an effective way, because that requires large advertising and PR budgets. Thirdly, effective politics on the Internet will require elaborate and costly data collection. The reason is that Internet media operate differently from traditional mass media. They will not broadcast to all but instead to specifically targeted individuals. Instead of the broad stroke of political TV messages, netcasted politics will be customized to be most effective. This requires extensive information about individuals' interests and preferences. Data banks then become a key to political effectiveness. Who would own and operate them? In some cases the political parties. But they could not maintain control over the data banks where a primary exists that is open to many candidates. There is also a privacy problem, when semi-official political parties store information about the views, fears, and habits of millions of individuals. For both of those reasons the ability of parties to collect such data will be limited. Other political data banks will be operated by advocacy and interest groups. They would then donate to candidates data instead of money. The importance of such data banks would further weaken campaign finance laws and further strengthen interest group pluralism over traditional political parties. But in particular, political data banks will be maintained through what is now known as political consultants. They will establish permanent and proprietary data banks and become still bigger players in the political environment and operate increasingly as ideology-free for-profit consultancies. Even if the use of the Internet makes some political activity cheaper, it does so for everyone, which means that all organizations will increase their activities rather than spend less on them.[3] If some aspects of campaigning become cheaper, they would not usually spend less, but instead do more. Thus, any effectiveness of early adopters will soon be matched by their rivals and will simply lead to an accelerated, expensive, and mutually canceling political arms-race of investment in action techniques and new-media marketing technologies. The early users of the Internet experienced a gain in their effectiveness, and now they incorrectly extrapolate this to society at large. While such gain is trumpeted as the empowerment of the individual over Big Government and Big Business, much of it has simply been a relative strengthening of individuals and groups with computer and online skills (who usually have significantly above-average income and education) and a relative weakening of those without such resources. Government did not become more responsive due to online users; it just became more responsive to them. The Internet will make reasoned and informed political dialog more difficult. True, the Internet is a more active and interactive medium than TV. But is its use in politics a promise or a reality? Just because the quantity of information increases does not mean that its quality rises. To the contrary. As the Internet leads to more information clutter, it will become necessary for any message to get louder. Political information becomes distorted, shrill, and simplistic. One of the characteristics of the Internet is disintermediation, in business as well as in politics. In politics, it leads to the decline of traditional news media and their screening techniques.
The acceleration of the news cycle by necessity leads to less careful checking, while competition leads to more sensationalism. Issues get attention if they are visually arresting and easily understood. This leads to media events, to the 15 minutes of fame, to the sound bite, to infotainment. The Internet also permits anonymity, which leads to the creation of, and to last-minute political ambush. The Internet lends itself to dirty politics more than the more accountable TV. While the self-image of the tolerant digital citizen persists, an empirical study of the content of several political usenet groups found much intolerant behavior: domineering by a few; rude flaming; and reliance on unsupported assertions (Davis, 1999). Another investigation finds no evidence that computer-mediated communication is necessarily democratic or participatory (Streck, 1998). The Internet disconnects as much as it connects. Democracy has historically been based on community. Traditionally, such communities were territorial: electoral districts, states, and towns. Community, to communicate -- the terms are related: community is shaped by the ability of its members to communicate with each other. If the underlying communications system changes, the communities are affected. As one connects in new ways, one also disconnects the old ways. As the Internet links with new and far-away people, it also reduces relations with neighbors and neighborhoods. The long-term impact of cheap and convenient communications is a further geographic dispersal of the population, and thus greater physical isolation. At the same time, the enormous increase in the number of information channels leads to an individualization of mass media, and to fragmentation. Suddenly, critics of the lowest-common-denominator programming of TV now get nostalgic for the electronic hearth around which society huddled. They discovered the integrative role of mass media. On the other hand, the Internet also creates electronically linked new types of community. But these are different from traditional communities. They have less of the averaging that characterizes physical communities -- throwing together the butcher, the baker, the candlestick maker. Instead, these new communities are more stratified along some common dimension, such as business, politics, or hobbies. These groups will therefore tend to be issue-driven, more narrow, more narrow-minded, and sometimes more extreme, as like-minded people reinforce each other's views. Furthermore, many of these communities will be owned by someone. They are like a shopping mall, a gated community, with private rights to expel, to promote, and to censor. The creation of community has been perhaps the main asset of Internet portals such as AOL. It is unlikely that they will dilute the value of these assets by relinquishing control. If it is easy to join such virtual communities, it also becomes easy to leave, in a civic sense, one's physical community. Community becomes a browsing experience. Information does not necessarily weaken the state. Can the Internet reduce totalitarianism? Of course. Tyranny and mind control become harder. But Internet romantics tend to underestimate the ability of governments to control the Internet, to restrict it, and to indeed use it as an instrument of surveillance. How quickly we forget. Only a few years ago, the image of information technology was Big Brother and mind control. That was extreme, of course, but the surveillance potential clearly exists.
Cookies can monitor usage. Wireless applications create locational fixes. Identification requirements permit the creation of composites of people's public and private activities and interests. Newsgroups can (and are) monitored by those with stakes in an issue. A free access to information is helpful to democracy. But the value of information to democracy tends to get overblown. It may be a necessary condition, but not a sufficient one. Civil war situations are not typically based on a lack of information. Yet there is an undying belief that if people only knew, e.g. by logging online, they would become more tolerant of each other. That is a wishful and optimistic hope, but is it based on history? Hitler came to power in a republic where political information and communication were plentiful. Democracy requires stability, and stability requires a bit of inertia. The most stable democracies are characterized by a certain slowness of change. Examples are Switzerland and England. The US operates on the basis of a 210-year-old Constitution. Hence the acceleration of politics made by the Internet is a two-edged sword. The Internet and its tools accelerate information flows, no question about it. But the same tools are also available to any other group, party, and coalition. Their equilibrium does not change, except temporarily in favor of early adopters. All it may accomplish in the aggregate is a more hectic rather than a more thoughtful process. Electronic voting does not strengthen democracy. The Internet enables electronic voting and hence may increase voter turnout. But it also changes democracy from a representative model to one of direct democracy. Direct democracy puts a premium on resources of mobilization, favoring money and organization. It disintermediates elected representatives. It favors sensationalized issues over boring ones. Almost by definition, it limits the ability to make unpopular decisions. It makes harder the building of political coalitions (Noam, 1980, 1981). The arguments against direct democracy were made perhaps most eloquently in the classic arguments for the adoption of the US Constitution, by James Madison in the Federalist Papers #10. Electronic voting is not simply the same as traditional voting without the inconvenience of waiting in line. When voting becomes like channel clicking on a remote, it is left with little of the civic engagement of voting. When voting becomes indistinguishable from a poll, polling and voting merge. With the greater ease and anonymity of voting, a market for votes is unavoidable. Participation declines if people know the expected result too early, or where the legitimacy of the entire election is in question. Direct access to public officials will be phony. In 1997, Wired magazine and Merrill Lynch commissioned a study of the political attitudes of the digital connected. The results showed them more participatory, more patriotic, more pro-diversity, and more voting-active. They were religious (56% say they pray daily); pro-death penalty (3/4); pro-marijuana legalization (71%); pro-market (%) and pro-democracy (57%). But are they outliers or the pioneers of a new model? At the time of the survey (1997) the digitally connected counted for 9% of the population; they were better educated, richer (82% owned securities); whites; younger; and more Republican than the population as a whole. In the Wired/Merrill Lynch survey, none of the demographic variables were corrected for. Other studies do so, and reach far less enthusiastic results.
One study of the political engagement of Internet users finds that they are only slightly less likely to vote, and are more likely to contact elected officials. The Internet is thus a substitute for such contacts, not their generator. Furthermore, only weak causality is found (Bimber 1998). Another survey finds that Internet users access political information roughly in the same proportions as users of other media, about 5% of their overall information usage (Pew, 1998). Another study finds that users of the Internet for political purposes tend to be already involved. Thus, the Internet reinforces political activity rather than mobilizing new activity (Norris, Pippa, 1999). Yes, anybody can fire off email messages to public officials and perhaps even get a reply, and this provides an illusion of access. But the limited resource will still be scarce: the attention of those officials. By necessity, only a few messages will get through. Replies are canned, like answering machines. If anything, the greater flood of messages will make gatekeepers more important than ever: power brokers that can provide access to the official. As demand increases while the supply is static, the price of access goes up, as does the commission to the middle-man. This does not help the democratic process. Indeed, public opinion can be manufactured. Email campaigns can substitute technology and organization for people. Instead of grass roots, one can create what has been described as "Astroturf," i.e. manufactured expression of public opinion. Ironically, the most effective means of communication (outside of a bank check) becomes the lowest in tech: the handwritten letter (Blau, 1988). If, in the words of a famous cartoon, on the Internet nobody knows that you are a dog, then everyone is likely to be treated as one. The Internet facilitates the International Manipulation of Domestic Politics. Cross-border interference in national politics becomes easier with the Internet. Why negotiate with the US ambassador if one can target a key Congressional chairman by an e-mail campaign, chat group interventions, misinformation, and untraceable donations? People have started to worry about computer attacks by terrorists. They should worry more about state-sponsored interferences into other countries' electronic politics. Indeed, it is increasingly difficult to conduct national politics and policies in a globalized world, where distance and borders are less important than in the past, even if one does not share the hyperbole of the evaporation of the Nation State (Negroponte 1995). The difficulty of societies to control their own affairs leads, inevitably, to backlash and regulatory intervention. Conclusion: It is easy to romanticize the past of democracy as Athenian debates in front of an involved citizenry, and to believe that its return by electronic means is nigh. A quick look in the rear-view mirror, to radio and then TV, is sobering. Here, too, the then-new media were heralded as harbingers of a new and improved political dialogue. But the reality of those media has been one of cacophony, fragmentation, increasing cost, and declining value of hard information. The Internet makes it easier to gather and assemble information, to deliberate and to express oneself, and to organize and coordinate action (Blau, 1998). It would be simplistic to deny that the Internet can mobilize hard-to-reach groups, and that it has unleashed much energy and creativity. Obviously there will be some shining success stories.
But it would be equally naive to cling to the image of the early Internet - - nonprofit, cooperative, and free - - and ignore that it is becoming a commercial medium, like commercial broadcasting that replaced amateur ham radio. Large segments of society are disenchanted with a political system that is often unresponsive, frequently affected by campaign contributions, and always slow. To remedy such flaws, various solutions have been offered and embraced. To some it is to return to spirituality. For others it is to reduce the role of government and hence the scope of the democratic process. And to others, it is the hope for a technical solution like the Internet. Yet, it would only lead to disappointment if the Internet were sold as the snake oil cure for all kinds of social problems. It simply cannot sustain such an expectation. Indeed, if anything, the Internet will lead to less stability, more fragmentation, less ability to fashion consensus, more interest group pluralism. High-capacity computers connected to high-speed networks are no remedies for flaws in a political system. There is no quick fix. There is no silver bullet. There is no free lunch.

Even if you don't buy that, innovation's tapped out and can't solve existential threats alone

Tainter & Patzek 12 - Joseph A. Tainter, Professor, Department of Environment and Society, Utah State University; and Tadeusz W. Patzek, Professor, Department of Petroleum and Geosystems Engineering, The University of Texas at Austin, Drilling Down: The Gulf Oil Debacle and Our Energy Dilemma, p. 48

We are often assured that innovation in the future will reduce our society's dependence on energy and other resources while providing a lifestyle such as we now enjoy. We discuss this point further in Chaps. 5 and 9. Here we observe that rates of innovation appear to change in a manner similar to the Hubbert cycle of resource production. This finding has important implications for the future productivity and complexity of our society. Energy flows, technology development, population growth, and individual creativity can be combined into an overall Innovation Index, which is the number of patents granted each year by the U.S. Patent Office per one million inhabitants of the United States of America. This specific patent rate has the units of the number of patents per year per one million people. Figure 3.16 is a decomposition of this patent rate into multiple Hubbert-like cycles between 1790 and 2009. Interestingly, the fundamental Hubbert cycle of the U.S. patent rate peaked in 1914, the year in which World War I broke out. The second major rate peak was in 1971, coinciding with the peak of U.S. oil production. The last and tallest peak of productivity occurred in 2004. Note that without a new cycle of inventions in something, the current cycles will expire by 2050. In other words, the productivity of U.S. innovation will decline dramatically in the next 20-30 years, with some of this decline possibly being forced by a steady decline of support for fundamental research and development. Energy and Complexity: Each new complex addition to the already overwhelmingly complex social and scientific structures in the United States is less and less relevant, while costing additional resources and aggravation. Most of this complexity is apparent to the naked eye: look at the global banking and trading system, the healthcare system, the computer operating systems and software, military operations, or government structures.
The scope of the problem is also obvious in the production pains of Boeing's 787 Dreamliner, and in the drilling of the BP Macondo well.

Internet not key to economy

Charles Kenny, 6/17/2013. Senior fellow at the Center for Global Development. Think the Internet Leads to Growth? Think Again, Bloomberg Businessweek, http://www.businessweek.com/articles/2013-06-17/think-the-internet-leads-to-growth-think-again.

Remember the year 2000, in the months after the Y2K bug had been crushed, when all appeared smooth sailing in the global economy? When the miracle of finding information online was so novel that The Onion ran an article, "Area Man Consults Internet Whenever Possible"? It was a time of confident predictions of an ongoing economic and political renaissance powered by information technology. Jack Welch - - then the lauded chief executive officer of General Electric (GE) - - had suggested the Internet was the single most important event in the U.S. economy since the Industrial Revolution. The Group of Eight highly industrialized nations - - at that point still relevant - - met in Okinawa in 2000 and declared, "IT is fast becoming a vital engine of growth for the world economy. Enormous opportunities are there to be seized by us all." In a 2000 report, then-President Bill Clinton's Council of Economic Advisers suggested (PDF), "Many economists now posit that we are entering a new, digital economy that could inaugurate an unprecedented period of sustainable, rapid growth." It hasn't quite worked out that way. Indeed, if the last 10 years have demonstrated anything, it's that for all the impact of a technology like the Internet, thinking that any new innovation will set us on a course of high growth is almost certainly wrong. That's in part because many of the studies purporting to show a relationship between the Internet and economic growth relied on shoddy data and dubious assumptions. In 1999 the Federal Reserve Bank of Cleveland released a study that concluded (PDF), "the fraction of a country's population that has access to the Internet is, at least, correlated with factors that help to explain average growth performance." It did so by demonstrating a positive relationship between the number of Internet users in a country in 1999 with gross domestic product growth from 1974 to 1992. Usually we expect the thing being caused (growth in the 1980s) to happen after the things causing it (1999 Internet users). In defense of the Fed, researchers at the World Bank recently tried to repeat the same trick. They estimated that a 10 percent increase in broadband penetration in a country was associated with a 1.4 percentage point increase in growth rate. This was based on growth rates and broadband penetration from 1980 to 2006. Given that most deployment of broadband occurred well after the turn of the millennium, the only plausible interpretation of the results is that countries that grew faster from 1980 to 2006 could afford more rapid rollouts of broadband. Yet the study is widely cited by broadband boosters. Many are in denial about the failure of the IT revolution to spark considerable growth. Innovation in information technology has hardly dried up since 2000. YouTube (GOOG) was founded in 2005, and Facebook (FB) is only a year older. Customer-relations manager Salesforce.com (CRM), the first cloud-based solution for business, only just predates the turn of the millennium. And there are now 130 million smartphones in the U.S., each with about the same computing power as a 2005 vintage desktop. Meanwhile, according to the U.S.
Department of Commerce (PDF), retail e-commerce as a percentage of total retail sales has continued to climb - - e-commerce sales were more than 6 percent of the total by the fourth quarter of 2012, up from less than 2 percent in 2003. Yet despite continuing IT innovation, we've seen few signs that predictions of "an unprecedented period of sustainable, rapid growth" are coming true. U.S. GDP expansion in the 1990s was a little faster than the 1980s - - it climbed from an annual average of 3 percent to 3.2 percent. But GDP growth collapsed to 1.7 percent from 2000 to 2009. Northwestern University economist Robert Gordon notes that U.S. labor productivity growth spiked briefly - - rising from 1.38 percent from 1972 to 1996 to 2.46 percent from 1996 to 2004 - - but fell to 1.33 percent from 2004 and 2012. Part of the labor productivity spike around the turn of the century was because of the rapidly increasing efficiency of IT production (you get a lot more computer for the same cost nowadays). Another part was because of considerable investments in computers and networks across the economy - - what economists call capital deepening. But even during the boom years it was near-impossible to see an economywide impact of IT on total factor productivity - - or the amount of output we were getting for a given input of capital and labor combined. Within the U.S., investment in the uses of the Internet for business applications had an impact on wage and employment growth in only 6 percent of counties - - those that already had high incomes, large populations, high skills, and concentrated IT use before 1995, according to a recent analysis (PDF) by Chris Forman and colleagues in the American Economic Review. Investments in computers and software did yield a return for most companies - - but the return wasn't anything special. So what happened to the promised Internet miracle? While the technology has had a dramatic impact on our lives, it hasn't had a huge impact on traditional economic measures. Perhaps that shouldn't come as a surprise. To understand why, think about television in the 1970s. Broadcast to the home for free, and all we had to pay for was the set and the electricity to run it. Despite that small expenditure, we spent hours a day watching TV. Fast-forward four decades: 2