
Comments

heidibrayer hasn't commented recently.

Submissions

Designing an Insider Threat Program

heidibrayer writes  |  yesterday

heidibrayer (2976759) writes "Insider threat is the threat to organization’s critical assets posed by trusted individuals — including employees, contractors, and business partners — authorized to use the organization’s information technology systems. Insider threat programs within an organization help to manage the risks due to these threats through specific prevention, detection, and response practices and technologies. The National Industrial Security Program Operating Manual (NISPOM), which provides baseline standards for the protection of classified information, is considering proposed changes that would require contractors that engage with federal agencies, which process or access classified information, to establish insider threat programs. The proposed changes to the NISPOM were preceded by Executive Order 13587, Structural Reforms to Improve the Security of Classified Networks and the Responsible Sharing and Safeguarding of Classified Information. Signed by President Obama in September 2011, Executive Order 13587 requires federal agencies that operate or access classified computer networks to implement insider threat detection and prevention programs.

Since the issuance of Executive Order 13587, the following key resources have been developed:

- The National Insider Threat Task Force developed minimum standards for implementing insider threat programs. These standards include a set of questions to help organizations conduct insider threat self-assessments.

- The Intelligence and National Security Alliance conducted research to determine the capabilities of existing insider threat programs.

- The Intelligence Community Analyst-Private Sector Partnership Program developed a roadmap for insider threat programs.

CERT’s insider threat program training and certificate programs are based on the above resources as well as CERT’s own Insider Threat Workshop, common sense guidelines for mitigating insider threats, and in-depth experience and insights from helping organizations establish computer security incident response teams. As described in this blog post, researchers from the Insider Threat Center at the Carnegie Mellon University Software Engineering Institute are also developing an approach based on organizational patterns to help agencies and contractors systematically improve the capability of insider threat programs to protect against and mitigate attacks."

Link to Original Source

7 Categories of Agile Metrics

heidibrayer writes  |  about a week ago

heidibrayer (2976759) writes "More and more, suppliers of software-reliant systems are moving away from traditional waterfall development practices in favor of agile methods. As described in previous posts on this blog, agile methods are effective for shortening delivery cycles and managing costs. If the benefits of agile are to be realized effectively, however, personnel responsible for overseeing software acquisitions must be fluent in metrics used to monitor these programs. This blog post highlights the results of an effort by researchers at the Carnegie Mellon University Software Engineering Institute to create a reference for personnel who oversee software development acquisition for major systems built by developers applying agile methods. This post also presents seven categories for tracking agile metrics."
Link to Original Source

Eliciting & Analyzing Unstated Requirements in Sociotechnical Ecocystems

heidibrayer writes  |  about two weeks ago

heidibrayer (2976759) writes "As recent news attests, the rise of sociotechnical ecosystems (STE)—which, we define as a software system that engages a large and geographically-distributed community in a shared pursuit—allows us to work in a mind space and a data space that extends beyond anything that we could have imagined 20 or 30 years ago. STEs present opportunities for tackling problems that could not have even been approached previously because the needed experts and data are spread across multiple locations and distance. Since STEs can be complex and have many diverse stakeholders, a key challenge faced by those responsible for establishing and sustaining them is eliciting requirements to inform their development efforts. Yet stakeholders often have requirements that they are not aware of, so they do not specify them. Uncovering these unstated requirements can be hard and is not well-supported by traditional approaches to requirements elicitation. This blog post describes initial results of an effort by researchers at the Carnegie Mellon University Software Engineering Institute aimed at developing an approach for determining the unstated needs of stakeholders typical of large, diverse programs and especially STEs."
Link to Original Source

Evolutionary Improvements of Quality Attributes: Performance in Practice

heidibrayer writes  |  about three weeks ago

heidibrayer (2976759) writes "Continuous delivery practices, popularized in Jez Humble’s 2010 book Continuous Delivery, enable rapid and reliable software system deployment by emphasizing the need for automated testing and building, as well as closer cooperation between developers and delivery teams. As part of the Carnegie Mellon University Software Engineering Institute's (SEI) focus on Agile software development, we have been researching ways to incorporate quality attributes into the short iterations common to Agile development. We know from existing SEI work on Attribute-Driven Design, Quality Attribute Workshops, and the Architecture Tradeoff Analysis Method that a focus on quality attributes prevents costly rework. Such a long-term perspective, however, can be hard to maintain in a high-tempo, Agile delivery model, which is why the SEI continues to recommend an architecture-centric engineering approach, regardless of the software methodology chosen. As part of our work in value-driven incremental delivery, we conducted exploratory interviews with teams in these high-tempo environments to characterize how they managed architectural quality attribute requirements (QARs). These requirements—such as performance, security, and availability—have a profound impact on system architecture and design, yet are often hard to divide, or slice, into the iteration-sized user stories common to iterative and incremental development. This difficulty typically exists because some attributes, such as performance, touch multiple parts of the system. This blog post summarizes the results of our research on slicing (refining) performance in two production software systems. We also examined the ratcheting (periodic increase of a specific response measure) of scenario components to allocate QAR work."
Link to Original Source

An Appraisal of Systems Engineering: Defense v. Non-Defense

heidibrayer writes  |  about a month ago

heidibrayer (2976759) writes "In today’s systems it’s very hard to know where systems end and software begins. Software performs an integrating function in many systems, often serving as the glue interconnecting other system elements. We also find that many of the problems in software systems have their roots in systems engineering, which is an interdisciplinary field that focuses on how to design and manage complex systems over their life cycles. For that reason, staff at the Carnegie Mellon University Software Engineering Institute (SEI) often conduct research in the systems engineering realm. Process frameworks, architecture development and evaluation methods, and metrics developed for software are routinely adapted and applied to systems. Better systems engineering supports better software development, and both support better acquisition project performance. This blog post, the latest in a series on this research, analyzes project performance based on systems engineering activities in the defense and non-defense industries."
Link to Original Source

Research to Create Automated Buffer Overflow Protection

heidibrayer writes  |  about a month ago

heidibrayer (2976759) writes "According to a 2013 report examining 25 years of vulnerabilities (from 1998 to 2012), buffer overflow causes 14 percent of software security vulnerabilities and 35 percent of critical vulnerabilities, making it the leading cause of software security vulnerabilities overall. As of July 2014, the TIOBE index indicates that the C programming language, which is the language most commonly associated with buffer overflows, is the most popular language with 17.1 percent of the market. Embedded systems, network stacks, networked applications, and high-performance computing rely heavily upon C. Embedded systems can be especially vulnerable to buffer overflows because many of them lack hardware memory management units. This blog post describes my research on the Secure Coding Initiative in the CERT Division of the Carnegie Mellon University Software Engineering Institute to create automated buffer overflow prevention."
Link to Original Source

A Taxonomy for Managing Operational Cybersecurity Risk

heidibrayer writes  |  about 2 months ago

heidibrayer (2976759) writes "Organizations are continually fending off cyberattacks in one form or another. The 2014 Verizon Data Breach Investigations Report, which included contributions from SEI researchers, tagged 2013 as "the year of the retailer breach." According to the report, 2013 also witnessed “a transition from geopolitical attacks to large-scale attacks on payment card systems.” To illustrate the trend, the report outlines a 12-month chronology of attacks, including a January “watering hole” attack on the Council on Foreign Relations website followed in February by targeted cyber-espionage attacks against The New York Times and The Wall Street Journal. The well-documented Target breach brought 2013 to a close with the theft of more than 40 million debit and credit card numbers. This blog post highlights a recent research effort to create a taxonomy that provides organizations a common language and set of terminology they can use to discuss, document, and mitigate operational cyber security risks."
Link to Original Source

Case Study: The Changing Role of Software and Systems in Satellites

heidibrayer writes  |  about 2 months ago

heidibrayer (2976759) writes "The role of software within systems has fundamentally changed over the past 50 years. Software’s role has changed both on mission-critical DoD systems, such as fighter aircraft and surveillance equipment, and on commercial products, such as telephones and cars. Software has become not only the brain of most systems, but the backbone of their functionality. Acquisition processes must acknowledge this new reality and adapt. This blog posting, the second in a series about the relationship of software engineering (SwE) and systems engineering (SysE), shows how software technologies have come to dominate what formerly were hardware-based systems. This posting describes a case study: the story of software on satellites, whose lessons can be applied to many other kinds of software-reliant systems."
Link to Original Source

HTML5 for Mobile Software Applications at the Edge

heidibrayer writes  |  about 2 months ago

heidibrayer (2976759) writes "Many warfighters and first responders operate at what we call “the tactical edge,” where users are constrained by limited communication connectivity, storage availability, processing power, and battery life. In these environments, onboard sensors are used to capture data on behalf of mobile applications to perform tasks such as face recognition, speech recognition, natural language translation, and situational awareness. These applications then rely on network interfaces to send the data to nearby servers or the cloud if local processing resources are inadequate. While software developers have traditionally used native mobile technologies to develop these applications, the approach has some drawbacks, such as limited portability. In contrast, HTML5 has been touted for its portability across mobile device platforms, as well an ability to access functionality without having to download and install applications. This blog post describes research aimed at evaluating the feasibility of using HTML5 to develop applications that can meet tactical edge requirements."
Link to Original Source

Four Principles for Engineering Scalable, Big Data Systems

heidibrayer writes  |  about 3 months ago

heidibrayer (2976759) writes "In earlier posts on big data, I have written about how long-held design approaches for software systems simply don’t work as we build larger, scalable big data systems. Examples of design factors that must be addressed for success at scale include the need to handle the ever-present failures that occur at scale, assure the necessary levels of availability and responsiveness, and devise optimizations that drive down costs. Of course, the required application functionality and engineering constraints, such as schedule and budgets, directly impact the manner in which these factors manifest themselves in any specific big data system. In this post, the latest in my ongoing series on big data, I step back from specifics and describe four general principles that hold for any scalable, big data system. These principles can help architects continually validate major design decisions across development iterations, and hence provide a guide through the complex collection of design trade-offs all big data systems require."
Link to Original Source

Android, Heartbleed, Testing, and DevOps: An SEI Blog Mid-Year Review

heidibrayer writes  |  about 3 months ago

heidibrayer (2976759) writes "In the first half of this year, the SEI blog has experienced unprecedented growth, with visitors in record numbers learning more about our work in secure coding for Android, malware analysis, Heartbleed, and V Models for Testing. In the first six months of 2014 (through June 20), the SEI blog has logged 60,240 visits, which is nearly comparable with the entire 2013 yearly total of 66,757 visits. As we reach the mid-year point, this blog posting takes a look back at our most popular areas of work (at least according to you, our readers) and highlights our most popular blog posts for the first half of 2014, as well as links to additional related resources that readers might find of interest."
Link to Original Source

Software Architecture Analysis Using AADL: A Real-World Perspective

heidibrayer writes  |  about 3 months ago

heidibrayer (2976759) writes "Introducing new software languages, tools, and methods in industrial and production environments incurs a number of challenges. Among other necessary changes, practices must be updated, and engineers must learn new methods and tools. These updates incur additional costs, so transitioning to a new technology must be carefully evaluated and discussed. Also, the impact and associated costs for introducing a new technology vary significantly by type of project, team size, engineers’ backgrounds, and other factors, so that it is hard to estimate the real acquisition costs. A previous post in our ongoing series on the Architecture Analysis and Design Language (AADL) described the use of AADL in research projects (such as System Architectural Virtual Integration (SAVI)) in which experienced researchers explored the language capabilities to capture and analyze safety-critical systems from different perspectives. These successful projects have demonstrated the accuracy of AADL as a modeling notation. This blog post presents research conducted independently of the SEI that aims to evaluate the safety concerns of several unmanned aerial vehicle (UAV) systems using AADL and the SEI safety analysis tools implemented in OSATE."
Link to Original Source

Establishing Trust in Wireless Emergency Alerts

heidibrayer writes  |  about 4 months ago

heidibrayer (2976759) writes "The Wireless Emergency Alerts (WEA) service went online in April 2012, giving emergency management agencies such as the National Weather Service or a city’s hazardous materials team a way to send messages to mobile phone users located in a geographic area in the event of an emergency. Since the launch of the WEA service, the newest addition to the Federal Emergency Management Agency (FEMA) Integrated Public Alert and Warning System (IPAWS),“trust” has emerged as a key issue for all involved. Alert originators at emergency management agencies must trust WEA to deliver alerts to the public in an accurate and timely manner. The public must also trust the WEA service before it will act on the alerts. Managing trust in WEA is a responsibility shared among many stakeholders who are engaged with WEA. This blog post, the first in a series, highlights recent research aimed at enhancing both the trust of alert originators in the WEA service and the public’s trust in the alerts it receives."
Link to Original Source

What Systems Should Exist in an Automated DevOps Environment?

heidibrayer writes  |  about 4 months ago

heidibrayer (2976759) writes "To maintain a competitive edge, software organizations should be early adopters of innovation. To achieve this edge, organizations from Flickr and IBM to small tech startups are increasingly adopting an environment of deep collaboration between development and operations (DevOps) teams and technologies, which historically have been two disjointed groups responsible for information technology development. “The value of DevOps can be illustrated as an innovation and delivery lifecycle, with a continuous feedback loop to learn and respond to customer needs,” Ashok Reddy writes in the technical white paper, DevOps: The IBM approach. Beyond innovation and delivery, DevOps provides a means for automating repetitive tasks within the software development lifecycle (SDLC), such as software builds, testing, and deployments, allowing them to occur more naturally and frequently throughout the SDLC. This blog post, the second in our series, presents a generalized model for automated DevOps and describes the significant potential advantages for a modern software development team."
Link to Original Source

Needed: Improved and Increased Collaboration Between Software and Systems Engineering

heidibrayer writes  |  about 4 months ago

heidibrayer (2976759) writes "The Government Accountability Office (GAO) recently reported that acquisition program costs typically run 26 percent over budget, with development costs exceeding initial estimates by 40 percent. Moreover, many programs fail to deliver capabilities when promised, experiencing a 21-month delay on average. The report attributes the “optimistic assumptions about system requirements, technology, and design maturity [that] play a large part in these failures” to a lack of disciplined systems engineering analysis early in the program. What acquisition managers do not always realize is the importance of focusing on software engineering during the early systems engineering effort. Improving on this collaboration is difficult partly because both disciplines appear in a variety of roles and practices. This post, the first in a series, addresses the interaction between systems and software engineering by identifying the similarities and differences between the two disciplines and describing the benefits both could realize through a more collaborative approach."
Link to Original Source

Heartbleed: Q&A

heidibrayer writes  |  about 4 months ago

heidibrayer (2976759) writes "The Heartbleed bug, a serious vulnerability in the Open SSL crytopgrahic software library, enables attackers to steal information that, under normal conditions, is protected by the Secure Socket Layer/Transport Layer Security (SSL/TLS) encryption used to secure the internet. Heartbleed and its aftermath left many questions in its wake:

- Would the vulnerability have been detected by static analysis tools?
- If the vulnerability had been in the wild for two years, why did it take so long to come to public knowledge?
- Who is ultimately responsible for open-source code reviews and testing?
- Is there anything we can do to work around Heartbleed to keep banking and email applications in the web browser secure?

In late April 2014, researchers from the Carnegie Mellon University Software Engineering Institute and Codenomicon, one of the cybersecurity organizations that discovered the Heartbleed vulnerability, participated in a panel to discuss Heartbleed and strategies for preventing future vulnerabilities. During the panel discussion, we did not have enough time to address all of the questions from our audience, so we transcribed the questions and panel members wrote responses. This blog posting presents questions asked by audience members during the Heartbleed webinar and the answers developed by our researchers."
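
For readers who want a picture of the flaw itself, the following C sketch shows the general shape of the Heartbleed bug in simplified form (it is illustrative only, not OpenSSL's actual heartbeat code): a reply routine trusts a length field supplied by the peer instead of checking it against the data actually received.

    #include <stdlib.h>
    #include <string.h>

    /* Vulnerable pattern: claimed_len comes from the request and is never
       compared with actual_len, so memcpy reads past the received payload
       and leaks adjacent memory back to the peer. */
    unsigned char *build_reply_vulnerable(const unsigned char *payload,
                                          size_t actual_len, size_t claimed_len)
    {
        (void)actual_len;                          /* ignored: that is the bug */
        unsigned char *reply = malloc(claimed_len);
        if (reply != NULL)
            memcpy(reply, payload, claimed_len);   /* overread when claimed_len > actual_len */
        return reply;
    }

    /* Fixed pattern: requests whose claimed length exceeds the data present
       are discarded, which is essentially the bounds check added after disclosure. */
    unsigned char *build_reply_checked(const unsigned char *payload,
                                       size_t actual_len, size_t claimed_len)
    {
        if (claimed_len > actual_len)
            return NULL;
        unsigned char *reply = malloc(claimed_len);
        if (reply != NULL)
            memcpy(reply, payload, claimed_len);
        return reply;
    }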

Link to Original Source

Secure Coding Guidelines to Prevent Vulnerabilities Like Heartbleed

heidibrayer writes  |  about 5 months ago

heidibrayer (2976759) writes "Software developers produce more than 100 billion lines of code for commercial systems each year. Even with automated testing tools, errors still occur at a rate of one error for every 10,000 lines of code. While many coding standards address code style issues (i.e., style guides), CERT secure coding standards focus on identifying unsafe, unreliable, and insecure coding practices, such as those that resulted in the Heartbleed vulnerability. For more than 10 years, the CERT Secure Coding Initiative at the Carnegie Mellon University Software Engineering Institute has been working to develop guidance—most recently, The CERT C Secure Coding Standard: Second Edition—for developers and programmers through the development of coding standards by security researchers, language experts, and software developers using a wiki-based community process. This blog post explores the importance of a well-documented and enforceable coding standard in helping programmers circumvent pitfalls and avoid vulnerabilities."
Link to Original Source

Journals

heidibrayer has no journal entries.
