
Wednesday 30 October 2013

A Study - Why There's No Cure for Common Cold


In a pair of landmark studies that exploit the genetic sequencing of the “missing link” cold virus, rhinovirus C, scientists at the University of Wisconsin–Madison have constructed a three-dimensional model of the pathogen that shows why there is no cure yet for the common cold.
Writing today (Oct. 28, 2013) in the journal Virology, a team led by UW-Madison biochemistry Professor Ann Palmenberg provides a meticulous topographical model of the capsid or protein shell of a cold virus that until 2006 was unknown to science.
Rhinovirus C is believed to be responsible for up to half of all childhood colds, and is a serious complicating factor for respiratory conditions such as asthma. Together with rhinoviruses A and B, the recently discovered virus is responsible for millions of illnesses yearly at an estimated annual cost of more than $40 billion in the United States alone.
The work is important because it sculpts a highly detailed structural model of the virus, showing that the protein shell of the virus is distinct from those of other strains of cold viruses.
“The question we sought to answer was how is it different and what can we do about it? We found it is indeed quite different,” says Palmenberg, noting that the new structure “explains most of the previous failures of drug trials against rhinovirus.”
The A and B families of cold virus, including their three-dimensional structures, have long been known to science as they can easily be grown and studied in the lab. Rhinovirus C, on the other hand, resists culturing and escaped notice entirely until 2006 when “gene chips” and advanced gene sequencing revealed the virus had long been lurking in human cells alongside the more observable A and B virus strains.
The new cold virus model was built “in silico,” drawing on advanced bioinformatics and the genetic sequences of 500 rhinovirus C genomes, which provided the three-dimensional coordinates of the viral capsid.
“It’s a very high-resolution model,” notes Palmenberg, whose group along with a team from the University of Maryland was the first to map the genomes for all known common cold virus strains in 2009. “We can see that it fits the data.”
With a structure in hand, the likelihood that drugs can be designed to effectively thwart colds may be in the offing. Drugs that work well against the A and B strains of cold virus have been developed and advanced to clinical trials. However, their efficacy was blunted because they were built to take advantage of the surface features of the better known strains, whose structures were resolved years ago through X-ray crystallography, a well-established technique for obtaining the structures of critical molecules.
Because all three cold virus strains contribute to the common cold, drug candidates failed: the surface features that permit rhinovirus C to dock with host cells and evade the immune system were unknown and differ from those of rhinoviruses A and B.
Based on the new structure, “we predict you’ll have to make a C-specific drug,” explains Holly A. Basta, the lead author of the study and a graduate student working with Palmenberg in the UW-Madison Institute for Molecular Virology. “All the [existing] drugs we tested did not work.”
Antiviral drugs work by attaching to and modifying surface features of the virus. To be effective, a drug, like the right piece of a jigsaw puzzle, must fit and lock into the virus. The lack of a three-dimensional structure for rhinovirus C meant that the pharmaceutical companies designing cold-thwarting drugs were flying blind.
“It has a different receptor and a different receptor-binding platform,” Palmenberg explains. “Because it’s different, we have to go after it in a different way.”

Tuesday 29 October 2013

Top 10 Enterprise Resource Planning (ERP) Vendors



Er. Isha Nagpal
Assistant Professor, DCSE, PPIMT, Hisar
 


CONTENTS
·         The Top Ten ERP Vendors
·         Understanding the ERP Market Space
·         ERP Moving into Cloud Computing
·         Choosing between On-premise and Cloud Based Solutions
·         ERP Vendors by Sector
·         ERP Vendors in the Manufacturing and Distribution Industry
·         Transport, Communication, Energy, Sanitary Services
·         Selecting an ERP Solution
·         A Few Final Thoughts

 The Top Ten ERP Vendors

The global ERP market has been seeing average growth of about 10% year on year since 2006, and while the current global economic slowdown is bound to cause a dip in this growth pattern, it is a safe bet that the overall trend will sustain. Growth in ERP markets will almost certainly return to 10% plus as soon as the Eurozone crisis subsides.

Understanding the ERP Market Space

Like every other segment of the IT industry, the ERP industry is evolving rapidly. The industry has clearly differentiated between very large enterprises and the small and medium business sectors. It is the second segment that is seeing rapid growth and the emergence of new players in the ERP business.
ERP vendors are classified as Tier I, II or III depending on the kinds of clients they service. The three groups are very distinct and the size and complexity of their solutions are also very distinct.
In general, the industry classifies a Tier I ERP vendor as one that sells extensively to the Tier I market – a market of companies with annual revenues exceeding $1 billion. These companies are invariably multinationals with a presence in many different geographic regions. Naturally enough, Tier I ERP products have a high cost of ownership due to their complexity and the costs of implementation and support. While there were several Tier I vendors in the past, mergers and consolidations have shrunk the list considerably. The list of Tier I ERP vendors is now very small and consists of just two entries – SAP and Oracle.
Tier II vendors sell ERP products that suit mid-sized companies with revenues in the range of $50 million to about $1 billion. The products of Tier II vendors are specifically built to handle this market and cater to single or multiple deployment locations. Naturally, Tier II solutions are easier to manage and support and cost correspondingly less as well. Often, Tier II solutions are confined to a specific industry vertical. This group sees considerable competition and comprises about 20 well-known companies.
Tier III ERP solution providers target companies that have revenues of $10 million to $50 million. Solutions provided by these companies are simple to implement and support and have correspondingly lower cost of ownership. Many ERPs in this group are single location installations and built for a single vertical. While they are easy to manage and deploy, the risk is that a company could soon outgrow the solution and hence some kind of migration path must be kept in mind when a small but rapidly growing company selects a Tier III solution.
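As a rough illustration of this tiering, the revenue bands above can be expressed as a simple classification rule. This is only a sketch of the taxonomy described in the text; the function name and the treatment of companies below $10 million are our own assumptions.

```python
def erp_tier(annual_revenue_usd):
    """Classify a prospective ERP customer into the vendor tier
    that typically serves it, using the revenue bands above."""
    if annual_revenue_usd > 1_000_000_000:      # over $1 billion -> SAP, Oracle
        return "Tier I"
    elif annual_revenue_usd >= 50_000_000:      # $50 million to $1 billion
        return "Tier II"
    elif annual_revenue_usd >= 10_000_000:      # $10 million to $50 million
        return "Tier III"
    else:                                       # below typical ERP tiers
        return "Unclassified"

print(erp_tier(2_500_000_000))  # Tier I
print(erp_tier(200_000_000))    # Tier II
print(erp_tier(20_000_000))     # Tier III
```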
ERP Moving into Cloud Computing
Another interesting development in the ERP space is the advent of cloud computing solutions. A number of users are beginning to use cloud-based ERP solutions. These solutions typically provide a lower cost of ownership – initial startup costs can be lower by as much as 30% to 50% compared to an ERP solution hosted on your own premises. This becomes even more relevant for companies with a large geographical expanse. Typically, the first movers to the cloud were the relatively smaller companies, and mid-range companies were the next to consider a move to the cloud. Large companies were the most conservative in this regard.
Since ERP implementations can be very complex, smaller companies are able to experiment more easily with cloud solutions. The very large companies have all made very significant investments in their existing ERP systems and hence may not be so keen to change.
Nevertheless, it is clear that the Software as a Service (SaaS) model will influence the ERP industry considerably in the future. Generally, SaaS is associated with lower costs due to following a rental model for using software and due to a ‘pay as you go’ approach. It is not clear how this will apply to ERP solutions, but the move to the cloud is indisputable. There are several concerns about the cloud, but these are also being addressed as the technology matures and finds widespread use – some of these are:
 Will the SaaS model imply a standardized solution?
 What happens to any earlier ERP investment?
 Risks of governance, security and vendor lock in.

In spite of the relevance of these concerns, it is clear that more cloud based ERP solutions will emerge. Very recently, Larry Ellison announced that Oracle would now be embracing cloud technology. Although he did not mention cloud based ERP solutions, he did throw his weight behind the technology.
Implementation sizes of cloud based ERP solutions will increase slowly and this is a space that must be watched carefully.

Choosing between On-premise and Cloud Based Solutions

A large number of case studies are available about both cloud based and on-premise deployment of ERP solutions. From a study of these, the key reasons why either is selected can be easily discovered.
Companies that choose on-premise hosting often do so for the following reasons:
 Leveraging existing systems – companies that had substantial investment already in on-premise hardware or software wished to leverage that to reduce overall costs.
 Ensure connectivity with legacy systems – in some cases legacy systems were critical to the business and the company wanted to ensure that these stayed connected to the final solution. This was easier with on-premise installations.
 More predictable performance – many companies are still uncomfortable with the cloud and prefer the security of on-premise installations to ensure that the systems stay under their control.
 Compliance issues – sometimes there are compliance issues, such as those mandated by HIPAA, that are easier to meet if the solution is hosted on-premise.

On the other hand, companies that selected cloud based solutions favored the following reasons to do so:
 Lower initial cost – with a pay as you go model, initial costs are a fraction of the on-premise model. This makes ERP affordable for many companies that would otherwise not be able to consider such solutions.
 Rapid deployment – since no installation is done on your premises, the roll out is much faster. All you have to do is to ensure that PCs are connected to the Internet.
 Very little IT staff is needed to manage the ERP.
 Upgrades are managed by the service provider and are transparent to you. In a traditional on-premise hosting, an upgrade sometimes causes reversal of customizations which can be a major loss of capability. This even prompts some companies not to upgrade.
 Scaling up is easy and involves very little additional expenditure. Your costs increase only as your usage does.
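A back-of-the-envelope comparison makes the trade-off concrete. All figures below are hypothetical, chosen only to mirror the pattern described above: cloud startup costs 30% to 50% lower, but a recurring pay-as-you-go subscription in place of lower on-premise running costs.

```python
# Hypothetical cost profiles -- illustrative figures only.
onprem_initial = 500_000   # licences, hardware, implementation
onprem_annual = 60_000     # maintenance, in-house IT staff

cloud_initial = 250_000    # roughly 50% lower startup cost
cloud_annual = 120_000     # subscription ("pay as you go")

def cumulative_cost(initial, annual, years):
    """Total spend after the given number of years."""
    return initial + annual * years

for years in (1, 3, 5):
    print(years, "yr:",
          "on-premise", cumulative_cost(onprem_initial, onprem_annual, years),
          "cloud", cumulative_cost(cloud_initial, cloud_annual, years))
```

With these made-up numbers the cloud option stays cheaper for the first few years; where the lines cross depends entirely on the real figures a vendor quotes, which is exactly why the TCO exercise discussed later matters.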

ERP Vendors by Sector

ERP solutions are such a specialized field, and the necessity of domain expertise is so critical, that solutions and their providers can be easily broken down by sector. Each sector has its own top 10 list. Of course, many of the players are common to all domains – SAP, Oracle and Microsoft being the main examples. But variations tend to creep into the Tier II and Tier III end of the market.
This study looks at the following major sectors of Industry:
 Manufacturing & distribution industry
 Transport, communication, energy, sanitary services
 Service sector
 Retail sector

Details of the top players in each sector are tabulated below along with their market shares. In some cases, where the market is extremely fragmented at the lower end, it is difficult to identify the last few of the top 10 and a grouping called ‘other’ captures the rest of the vendor list.

ERP Vendors in the Manufacturing and Distribution Industry

This market is dominated by SAP, Oracle and Microsoft in that order and together they command a 55% market share. A number of Tier II vendors also have considerable market share. The lower end of the market is very fragmented with 26% going to a large number of vendors each of whom has less than 1% market share. Here is what the list looks like - (All figures are in percentages – in this and subsequent tables).
Transport, Communication, Energy, Sanitary Services

In this sector, the top 3 remain unchanged but, as the table below will show, their market share increases by nearly 20%. As a result, SAP, Oracle and Microsoft cover nearly 73% of the entire market in this domain. The remaining vendors share about 11% between themselves and a much smaller proportion goes to the ‘others’ group. The table of relative standings in this section is as shown below.
Service Sector

The trend of the big three maintaining their dominance continues unchanged. The lower end of the market is more fragmented and only seven companies that have a market share of 1% or more can be identified. The position of the major players in this space is shown in the table below.
Retail Sector

In the retail sector the dominance of SAP, Oracle and Microsoft continues. Microsoft improves its position in this sector, coming out even with Oracle. In Tier III service providers, we find new entrants in the list with small vendors taking up nearly 11% of the market space. The table below shows the relative positions in this domain.
With the above background, a final list of the top ten players in the ERP segment is drawn up. This list takes into account the market share of each player in various market segments. Where market shares add to the same value, a vendor who has presence in larger numbers of domains is ranked higher. With this ranking methodology, the final top ten list is produced below.
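The ranking methodology above – total market share first, breadth of domain presence as the tie-breaker – can be expressed as a simple two-key sort. The vendor names and figures here are placeholders to demonstrate the rule, not the study's data:

```python
# Each entry: (vendor, total market share %, number of domains present in).
# Figures are placeholders illustrating the ranking rule only.
vendors = [
    ("Vendor A", 25.0, 4),
    ("Vendor B", 25.0, 3),   # same share as A, fewer domains -> ranked lower
    ("Vendor C", 12.0, 4),
]

# Sort descending by share, breaking ties by descending domain count.
ranked = sorted(vendors, key=lambda v: (v[1], v[2]), reverse=True)
print([name for name, _, _ in ranked])  # ['Vendor A', 'Vendor B', 'Vendor C']
```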
In the next few paragraphs, we discuss each of these companies briefly.
SAP – Founded in 1972 by five former IBM engineers, SAP is the undisputed market leader in the ERP space and is the third largest software company in the world. Its current version has more than 30,000 relational database tables that allow it to handle extremely complex business situations. While it is an undisputed number one in the Tier I ERP space, SAP has been criticized at times for being too complex and difficult to handle. If you are a small or medium company, this solution is probably more than what your company needs or could potentially handle.

Oracle – While Oracle was formerly best known for its relational database, it was for many years the database of choice for SAP ERP applications. This cooperative situation had existed since the late ’70s. However, sometime around 2004, Oracle began to look at building its own ERP solutions, and at the same time SAP began to offer its ERP solutions on the Microsoft SQL Server database platform as well. The first Oracle ERP product was Oracle Financials, which was released into the market as early as 1989. However, post 2004, Oracle became a serious player in the ERP market and is now a well-established number 2 in the Tier I market.

Microsoft – Microsoft Dynamics is mostly focused on Tier II clients in the ERP space. It provides solutions in a number of different business domains, including the Customer Relationship Management domain. A great advantage of Microsoft products is their ease of use. This holds for its ERP products as well.

Infor – Infor Global Solutions is a privately held company that has grown rapidly in the Tier II vendor space since 2002. The company has taken an aggressive acquisition route to growth and continues to follow this path even now, with its acquisition of ENXSUITE in 2011. Infor has a global presence to match the footprint of the top 3, with clients in 194 countries. Infor has solutions in as many as 14 different domains and a very good presence in each of the four specific domains discussed previously.

Epicor – Started in 1984 and working initially with DOS, Epicor later converted its products to Windows and followed a merger and acquisition path, acquiring companies selling ERP products and then offering their solutions as a comprehensive package. Epicor has a presence in over 150 countries and more than 20,000 Tier II / III customers. Epicor likes to call its ERP “the key to possibilities not yet imagined”.

Lawson – Acquired by Infor a couple of years ago, Lawson still maintains a separate identity although it does display the Infor logo on its web site. Specifically mentioning that it is tailored for the small to midsized business, Lawson has a presence in 68 countries and has more than 4,500 installations. Lawson caters to a large number of verticals and uses this as its USP. Simplicity of the solution is another key focus area in a market best known for its complexity.

QAD – The QAD website shows a chain with the logo of the cloud forming one of the links so we have an idea what is on the company’s mind. The QAD Enterprise Application is designed to make it easy for first time ERP users to begin using an ERP in their company with the least amount of migration problems. The company supports and engages with its customers to ensure that the return on investment is obtained rapidly.
Sage – Sage is a UK-based company whose beginnings lie in a 1981 summer job, when the first version of an accounting package was written. This grew through successive versions until, in 1984, Sage Software was launched as a company and achieved a fair amount of success. Like many other companies in the ERP space, Sage has grown through a number of acquisitions and says that ‘acquisitions are part of its DNA’. The cross-pollination of DNA appears to have been very successful, given the rate of growth Sage has been seeing.

IFS – Founded in 1983, IFS focuses on building agile ERP solutions on a service-oriented architecture (SOA), which implies easy modification and adaptation to user needs. IFS is most useful in four core strategic processes – service & asset management, manufacturing, supply chain and project management. It has a user base in excess of 2,000 installations and customers in 50 countries. One key reason for its success is its sharp focus on specific verticals.

Consona Corp – Deriving its name from ‘consonance with the customer’, Consona is active in ERP, CRM, knowledge management and other related fields. The company is privately held and has grown by acquiring a number of specialist ERP companies. If you are doing business in a niche area where Consona has a focus, you may just be lucky. No one else we know is offering an ERP solution tailored to printed circuit board manufacturers or to metal wire and cable manufacturers. A solution as focused as this is bound to be better than a generic ERP when put to use in one of those industries.

Selecting an ERP Solution
Selecting an ERP solution is a serious exercise and has to be executed with great care. Companies often go ahead with poorly or incompletely defined requirements and do not take adequate care in selection of a vendor. It is essential that the selection process encompass the following:
 A structured approach to defining requirements and creation of the tender document – all departments and stakeholders must contribute to the requirements definition and be aware of the solution selection process. At the end of this process, you should be able to define with great clarity what the final solution will be able to help your company accomplish.

Realistic and comprehensive demonstrations – typical vendor demonstrations tend to be simple and straightforward. You need to see demonstrations that apply to your specific situation and not to your industry in general. You will have to work out in advance what part of the activity you want to see demonstrated and how much sample data you are willing to provide prospective vendors. Needless to say, all your shortlisted vendors must provide the same type of demonstration with identical data.
An objective selection – the selection process must be clearly defined, with well-selected marking criteria. All stakeholders must be given an opportunity to rank solutions and contribute to a decision matrix. Ensure due weight is given to the following criteria in creating the marking matrix:

o Customizability – check carefully how flexible the solution is and what adding new functionality entails. Determine how much you can customize yourself without needing to ask for support.
o Technical Fit – the solution must fit the technology you are already using – for example if you are solidly on the Windows platform and use SQL Server as the database, you could opt for an ERP solution built on .Net and using the same database. This will simplify your manpower issues and make the solution easier to manage.
o Calculate the total cost of ownership – many costs are not apparent in a vendor proposal – these could include upgrades to hardware, additional manpower, network costs, costs of software maintenance and customization and so on. Spend time and effort to unearth these and calculate as accurate a TCO as possible.
o Do not restrict your selection list to the top three or four. There are more than 70 vendors in the market and many of the smaller ones offer very specialized niche ERP solutions. Some of them could fit your business very closely.
o Look to handle the unexpected – if you want to process a refund based on a photograph of a damaged carton the customer emails you, can the system handle the image and absorb it into the workflow?

o Easy-to-use reporting tools and generation of ad-hoc reports – there are ERP solutions that are extremely formal about building reports, and this forces your users to rely on your IT staff. Other solutions allow you a degree of freedom to create any ad-hoc report you need. Ease of report generation is an important criterion too.
o Interface with vendor and client systems – electronic data interchange with collaborators and clients can often be essential. Ensure that your solution provides this functionality without needing any additional third party translation tools.
o Security should be built in from role based security at the individual level all the way up to the division and the business level.
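One common way to turn criteria like those above into a decision matrix is a weighted score per candidate solution. The criteria names, weights and marks below are illustrative assumptions; in practice each stakeholder scores every shortlisted vendor and the weights are agreed in advance.

```python
# A minimal decision-matrix sketch. Weights should sum to 1.0 and be
# agreed by all stakeholders before any vendor is scored.
weights = {
    "customizability": 0.25,
    "technical_fit": 0.25,
    "total_cost_of_ownership": 0.30,
    "reporting": 0.10,
    "integration_and_security": 0.10,
}

def weighted_score(scores):
    """scores: criterion -> mark out of 10 for one candidate solution."""
    return sum(weights[c] * scores[c] for c in weights)

candidate = {
    "customizability": 7,
    "technical_fit": 9,
    "total_cost_of_ownership": 6,
    "reporting": 8,
    "integration_and_security": 7,
}
print(round(weighted_score(candidate), 2))  # 7.3
```

Ranking the shortlist by this score keeps the selection objective: disagreements surface as arguments about weights and marks, which can be discussed, rather than as gut feel.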

A Few Final Thoughts
Anyone who has worked on ERP solutions knows firsthand how difficult it can be to get everything right and derive real benefits from the initiative. That of course is subject matter broad enough to generate another white paper on. But when you are in charge of the implementation, should you use a big name or a small niche player?
The answer, we think, lies in how specialized your business is. If you can fit into a generic solution with a small amount of customization, then maybe a standard solution will work for you. However, if you are in a very specialized business, why should you look for a generic solution in place of something that is closely tailored to your needs? Such a solution ensures that the software is built to follow your workflow rather than the workflow being adapted to the software.
ERP solutions can be terribly difficult to implement and cause a considerable disruption at work. A closely tailored solution will cause the minimum disruption and assure you of the greatest chance of success. The vendors discussed in this paper are all experienced (the youngest in the ERP business appears to be Oracle!) and selecting one of them that fits your niche seems to be a sensible and practical approach.


Wednesday 23 October 2013

Is Poor Email Management Putting Your Organization At Risk?


Er. Isha Nagpal
Assistant Professor, DCSE, PPIMT, Hisar
 


CONTENTS
Abstract
The Balancing Act of Preservation
The Consequences of Not Preserving
Storing of Email Is Complicated by Its Form
The Cost of Compliance
The Benefits of an Information Management Solution


ABSTRACT
Organizations are driven by email, whether they are private companies or operating within the public sector. While regulations are often specific to various industries and operating sectors, the need to retain as well as produce email is universal. This paper will look at the risks entailed by improper email management, and how organizations can mitigate their risk.
Email drives the business of all organizations, whether private companies or public-sector entities. Employees communicate via internal email, often in preference to the telephone, and inquiries and support concerns are increasingly handled via email.

This creates a complicated balancing act for organizations, as they are bound by a variety of overlapping laws, acts, and regulations which both force them to preserve email and to produce them to outside parties, virtually on demand.

Systems which simply back up email stores cannot handle the tasks of demonstrating that proper preservation was followed, that laws were not violated, and that the entity can provide information when the laws require it to do so. This is where an email archiving or information management system becomes vital.

The Balancing Act of Preservation

There is a tangled web of overlapping regulations which all speak to preservation of email. These include:
1. Human Rights Act
2. Data Protection Act
3. Regulation of Investigatory Powers Act

Under each of these Acts, the provision for retention varies, while each typically looks to entities to preserve the original email yet adhere to the Data Protection Act’s mandate to retain “only as long as there is a business reason for that retention.”

For example, within the highly regulated financial services industry, the FSA mandates that all business emails be retained for six years, and certain ones indefinitely. HR records are typically retained for some period following the departure or dismissal of an individual, but some records are mandated to be kept for three or six years (Statutory Pay Regulations require three-year retention whilst the Taxes Management Act mandates six years).

Even unsuccessful job applicants’ applications and interview notes must be kept for at least six months per the Disability Discrimination Act. Therefore, the requirement to store emails is a lengthy one and affects virtually every communication in an organization.

The Consequences of Not Preserving

Each of the various Acts and regulations of which retention of data is a part has different consequences for organizations that do not comply. Public entities find themselves worried about the Freedom of Information Act, which includes a penalty scheme for non-compliance. The Data Protection Act, which affects virtually every organization in some manner, is more severe. In fact, Liverpool City Council pleaded guilty to a criminal charge in 2006 for failing to comply with a subject access request and was levied a fine in lieu of more serious punishment – the first such organization punished by the Information Commissioner's Office.

In addition to fines, which can range from relatively trivial to substantial, there is the consequence of loss of confidence. When this happens to a commercial concern, the ramifications are often reduced turnover and other negative business consequences. When this occurs to a public entity, the situation is a bit different. Because such entities are not competitive – i.e., there aren’t two competing council authorities for Liverpool City – the consequences of loss of confidence may include staff changes, either by vote or fiat, and even reduced funding.

Finally, there is the issue of discovery. HR complaints are only one aspect wherein organizations may be subject to legal proceedings. Liability lawsuits can be much more significant. In one recent case, a high-profile utilities authority was sued on a quality-of-service matter, specifically nuisance. They had extensive stores of emails and were unprepared for the extent of complex discovery which this case entailed. The resultant legal preparation and defense required expensive specialized software, an army of solicitors, and costs that ran into the millions of pounds. Even though the authority prevailed on the larger damage issues, the expense of defending themselves remains a significant and unanticipated cost.

Storing of Email Is Complicated by Its Form

None of the acts or regulations describes what constitutes “storage,” only that emails need to be stored and available for recall during the specified time period. In reality, email can exist – and thus be stored – in three different forms. The first of these is “live” email, within the user’s inbox; the second is locally stored email (PST files); and the third is archived email, the preferred method for long-term storage.

Of these three, the second form is the most problematic. Local email storage arose from attempts to place quotas on mailboxes to control storage costs and IT maintenance issues, and within certain programs as a way to create backup images of users’ Outlook data. The notion has since gained wide success but brings its own set of challenges. One of them is that locally-stored email is outside of the purview of the IT organization. Simply put, they have no visibility to what has been stored in these files.

A second is that these files tend to be unstable over time, and corruption means they are no longer accessible by the user, requiring additional IT cycles to try to recover them. And a final challenge is that the size of such files – in terms of how many emails they contain – is not documented. A PST frequently contains tens of thousands of emails, even though it appears as a single file.

The key to effectively storing email is the use of an information management or archiving system that understands all three forms in which email may be encountered. These systems can apply rules-based retention and disposition schemes regardless of the form of the email. They can also eliminate the need for large volumes of locally-stored email by proactively archiving and deleting emails which have passed the required compliance dates.
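A rules-based retention and disposition scheme of the kind described above boils down to a policy table plus a decision function that is applied wherever an email lives. The sketch below is illustrative only: the category names are ours, and the periods merely echo the examples discussed earlier (six years for business email, three years for statutory pay records, six months for unsuccessful applications).

```python
from datetime import date, timedelta

# Illustrative retention rules; real policies come from legal counsel.
RETENTION_DAYS = {
    "business": 6 * 365,        # cf. six-year FSA mandate
    "statutory_pay": 3 * 365,   # cf. Statutory Pay Regulations
    "job_application": 183,     # cf. six-month minimum
}

def disposition(category, received, today=None):
    """Return 'retain' or 'dispose' for an email, regardless of whether
    it lives in an inbox, a PST file or the archive."""
    today = today or date.today()
    period = timedelta(days=RETENTION_DAYS[category])
    return "dispose" if today - received > period else "retain"

print(disposition("job_application", date(2012, 1, 1), date(2013, 10, 23)))  # dispose
print(disposition("business", date(2012, 1, 1), date(2013, 10, 23)))         # retain
```

The essential point is that the rule takes only the category and the date, not the storage location – which is what lets the archiving system sweep locally stored PST files with the same policy it applies to live and archived mail.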

The Cost of Compliance

Organizations that have no solution to the challenge of storing and later producing emails face an increasing risk of monetary fines and other indirect consequences. There are really only two ways to address the problem: one is with increased personnel, and the other is to deploy an information management solution.

Either solution has cost implications, which are amplified by the current recession and shrinking budgets. In terms of pure cost, deploying an information management solution is inherently less expensive than adding personnel: these systems are largely automated, and existing staff can utilize them effectively without additional resources.

An information management solution has additional cost-saving benefits which should be considered when budgeting for such a solution. First, by effectively eliminating trouble-prone locally stored email, the IT staff will not face the additional burden of help desk support to fix and restore these files. Second, organizations that have some history of using an Exchange-based email solution find that up to 20% of their central storage is consumed by local email storage files that were re-imaged onto central servers for a variety of reasons. The bulk of those files can typically be removed upon successful deployment of an information management solution, deferring anticipated purchases of additional storage. Finally, service requests and discoveries can typically be handled in-house using the information management solution, thus eliminating the additional outside resources which would be required to comply with these requests.


The Benefits of an Information Management Solution

Modern email archiving solutions have become highly credible information management solutions: these solutions include modules for policy, retention management, compliance, and discovery. An information management solution archives emails based on adherence to rules-based policies – which are spelled-out in clear natural language rule sets – and automatically applies retention and disposition strategies. The users aren’t required to do anything, nor are their preferred environments compromised.

These solutions can eliminate the need for locally-stored emails because they will proactively archive email yet provide users a direct way to access those stored emails, eliminating the need for any local storage. To alleviate the need for additional storage for archived email, these solutions include compaction routines which automatically compress emails for archiving and conversely decompress them when they are accessed.
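The compress-on-archive, decompress-on-access idea described above can be sketched in a few lines. This is an illustrative Python sketch using the standard `zlib` module; the function names are hypothetical and do not correspond to any vendor's API:

```python
import zlib

def archive_email(body: str) -> bytes:
    """Compress an email body for archival storage (hypothetical helper)."""
    return zlib.compress(body.encode("utf-8"), level=9)

def retrieve_email(blob: bytes) -> str:
    """Transparently decompress an archived email when it is accessed."""
    return zlib.decompress(blob).decode("utf-8")

# Repetitive business email text compresses well, so the archive
# consumes far less storage than the live mailbox copy.
msg = "Quarterly report attached. " * 100
blob = archive_email(msg)
assert retrieve_email(blob) == msg
assert len(blob) < len(msg.encode("utf-8"))
```

The point of the round trip is that users never see the compressed form: access triggers decompression, so local storage becomes unnecessary without changing the user's workflow.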
The preferred information management solutions use a “manage in place” strategy, wherein policies and retention management will be applied regardless of where an email is found (live, local, or archived). This ensures that IT has a consistent understanding of the landscape of stored emails.

Preferred information management solutions also offer search and discovery capabilities. Users naturally engage search engines to retrieve older, archived emails, and search must be part of the information management solution. More sophisticated search capabilities, under the requirements of discovery, must also be provided, wherein legal professionals can query email archives and mailboxes to locate and catalog potentially-relevant emails in the face of litigation. Finally, these solutions need to offer a preservation mechanism that permits authorized personnel to place such emails under legal hold, such that the email, any attachments, and all relevant metadata are preserved and secured from further editing or modification.





Sunday 20 October 2013

Secure Data Transmission using Alternate Path in Ad hoc Network

Er. Isha Nagpal
Assistant Professor, DCSE, PPIMT, Hisar
 

Learning Objectives:
• Introduction to Mobile Ad hoc Network
• Types of Routing in MANET
• Traditional Approach of Data Transfer in Unicast Transferring
• Security Requirements of Mobile Ad-Hoc Network
Introduction
Mobile ad hoc networks (MANETs) consist of a collection of wireless mobile nodes that dynamically exchange data among themselves without relying on a fixed base station or a wired backbone network. An ad hoc network therefore lacks secure boundaries. It is the cooperative engagement of a collection of mobile nodes without the required intervention of any centralized access point: no access points pass information between participants. The network acts like a LAN that is built spontaneously as devices connect, instead of relying on base stations to coordinate the flow of messages to each node.
In MANETs, communication between nodes takes place over the wireless medium. Because nodes are mobile and may join or leave the network, MANETs have a dynamic topology. Nodes that are in transmission range of each other are called neighbours, and neighbours can send data directly to each other. When a node needs to send data to a non-neighbouring node, however, the data is routed through a sequence of multiple hops, with intermediate nodes acting as routers.

Types of Routing in MANET
1.1    Unipath Routing in MANET
In unipath routing, only a single route is used between a source and destination node. Routing protocols are used to find and maintain routes between source and destination nodes.
Two main classes of ad hoc routing protocols are table-based and on-demand protocols:
a) Table Based Protocols: Each node maintains a routing table containing routes to all nodes in the network. Nodes must periodically exchange messages with routing information to keep routing tables up-to-date. Therefore, routes between nodes are computed and stored, even when they are not needed.
b) On Demand Protocols: Nodes only compute routes when they are needed. On-demand protocols consist of the following two main phases:
1. Route discovery: It is the process of finding a route between two nodes.
2. Route maintenance: It is the process of repairing a broken route or finding a new route in the presence of a route failure.
Two of the most widely used protocols are the Dynamic Source Routing (DSR) and the Ad hoc On-demand Distance Vector (AODV) protocols. AODV and DSR are both on-demand protocols.
Dynamic Source Routing: DSR is an on-demand routing protocol for ad hoc networks. As a source routing protocol, DSR places the full route in each packet's header. Intermediate nodes use this route to forward packets towards the destination, and each node maintains a route cache containing routes to other nodes.
Route discovery: If the source does not have a route to the destination in its route cache, it broadcasts a route request (RREQ) message specifying the destination node for which the route is requested. The RREQ message includes a route record that lists the sequence of nodes traversed by the message. When an intermediate node receives an RREQ, it checks whether it already appears in the route record; if it does, it drops the message to prevent routing loops. An intermediate node that has already seen the same RREQ also drops the duplicate. Otherwise, it appends its own address to the route record and rebroadcasts the RREQ. When the destination receives the RREQ, it sends a route reply (RREP) message back to the source: if the destination has a route to the source in its route cache, it can send the RREP along that route; otherwise the RREP follows the reverse of the recorded route.
Route maintenance: When a node detects a broken link while trying to forward a packet to the next hop, it sends a route error (RERR) message back to the source identifying the failed link. When a node receives an RERR message, it deletes all cached routes containing that link.
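The discovery phase can be simulated as a flood of route records. The following is a minimal Python sketch, not the full DSR protocol (it omits RREQ IDs, caching, and the RREP exchange); the topology and node names are hypothetical:

```python
from collections import deque

# Hypothetical topology: adjacency lists, where listed nodes are neighbours
# (i.e. within radio range of each other).
TOPOLOGY = {
    "S": ["A", "B"],
    "A": ["S", "C"],
    "B": ["S", "C"],
    "C": ["A", "B", "D"],
    "D": ["C"],
}

def dsr_route_discovery(topology, source, destination):
    """Flood RREQs carrying a growing route record; a copy whose next hop
    already appears in its record is dropped (loop prevention). The first
    record to reach the destination is the discovered source route."""
    queue = deque([[source]])          # each queue entry is one route record
    while queue:
        record = queue.popleft()
        node = record[-1]
        if node == destination:
            return record              # the RREP would carry this route back
        for neighbour in topology[node]:
            if neighbour in record:    # already traversed: drop to avoid a loop
                continue
            queue.append(record + [neighbour])
    return None

print(dsr_route_discovery(TOPOLOGY, "S", "D"))  # ['S', 'A', 'C', 'D']
```

Because the flood expands records breadth-first, the first record to reach the destination is also a minimum-hop route in this sketch.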
Ad Hoc On-Demand Distance Vector: AODV is an on-demand routing protocol for ad hoc networks. AODV uses hop-by-hop routing, maintaining routing table entries at intermediate nodes.
Route Discovery: The route discovery process is initiated when a source needs a route to a destination and it does not have a route in its routing table. To initiate route discovery, the source floods the network with a RREQ packet specifying the destination for which the route is requested. When a node receives an RREQ packet, it checks to see whether it is the destination or whether it has a route to the destination. If either case is true, the node generates an RREP packet, which is sent back to the source along the reverse path. When the source node receives the first RREP, it can begin sending data to the destination.
Route Maintenance: When a node detects a broken link while attempting to forward a packet to the next hop, it generates a RERR packet that is sent to all sources using the broken link. The RERR packet erases all routes using the link along the way. If a source receives a RERR packet and a route to the destination is still required, it initiates a new route discovery process.
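AODV's reverse-path mechanism can also be sketched briefly: during the RREQ flood each node remembers the neighbour it first heard the request from, and the RREP retraces those reverse pointers, installing a next-hop routing entry at each node. This Python sketch is illustrative only (sequence numbers and RERR handling are omitted; the topology is hypothetical):

```python
from collections import deque

TOPOLOGY = {
    "S": ["A", "B"],
    "A": ["S", "C"],
    "B": ["S", "C"],
    "C": ["A", "B", "D"],
    "D": ["C"],
}

def aodv_route_discovery(topology, source, destination):
    """Flood an RREQ; each node keeps a reverse pointer to the neighbour it
    first received the RREQ from (duplicates are dropped). The RREP then
    walks the reverse path, installing hop-by-hop forwarding entries."""
    reverse = {source: None}           # node -> neighbour the RREQ came from
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == destination:
            break
        for nb in topology[node]:
            if nb not in reverse:      # first copy wins; later copies dropped
                reverse[nb] = node
                queue.append(nb)
    if destination not in reverse:
        return None
    # The RREP travels back along the reverse path; at each hop it installs
    # a routing-table entry: "next hop toward the destination".
    next_hop = {}
    node = destination
    while reverse[node] is not None:
        prev = reverse[node]
        next_hop[prev] = node
        node = prev
    return next_hop

print(aodv_route_discovery(TOPOLOGY, "S", "D"))  # {'C': 'D', 'A': 'C', 'S': 'A'}
```

Note the contrast with DSR: no node ever sees the full route; each node only learns its own next hop, which is what "hop-by-hop routing with table entries at intermediate nodes" means.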
1.2   Multipath Routing in MANETs
Standard routing protocols in ad hoc wireless networks, such as AODV and DSR, are mainly intended to discover a single route between a source and destination node. Multipath routing consists of finding multiple routes between a source and destination node.
1.2.1 Route Discovery and Maintenance: In multipath routing, route discovery and route maintenance consist of finding multiple routes between a source and destination node. Multipath routing protocols can attempt to find node disjoint, link disjoint, or non-disjoint routes. Node disjoint routes, also known as totally disjoint routes, have no nodes or links in common. Link disjoint routes have no links in common, but may have nodes in common. Non-disjoint routes can have both nodes and links in common. From a fault tolerance perspective, more reliable paths should be selected to reduce the chance of route failures. Path selection also plays an important role in QoS routing, where only a subset of paths that together satisfy the QoS requirement is selected.
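The three disjointness categories above can be checked mechanically: compare the links of two routes, then their intermediate nodes (the shared source and destination are excluded from the node check). A small illustrative Python sketch:

```python
def classify_paths(p, q):
    """Classify two routes (lists of node names sharing the same endpoints)
    as 'node disjoint', 'link disjoint', or 'non-disjoint'."""
    # Links as unordered pairs, since wireless links are bidirectional.
    links_p = {frozenset(edge) for edge in zip(p, p[1:])}
    links_q = {frozenset(edge) for edge in zip(q, q[1:])}
    shared_links = links_p & links_q
    shared_nodes = set(p[1:-1]) & set(q[1:-1])  # intermediate nodes only
    if not shared_links and not shared_nodes:
        return "node disjoint"      # totally disjoint routes
    if not shared_links:
        return "link disjoint"      # common node(s), no common link
    return "non-disjoint"

print(classify_paths(["S", "A", "D"], ["S", "B", "D"]))  # node disjoint
```

For example, routes S-A-C-X-D and S-B-C-Y-D share node C but no link, so they are link disjoint; S-A-C-D and S-B-C-D share the link C-D, so they are non-disjoint.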
1.2.2 Split Multipath Routing: Split Multipath Routing (SMR) is an on-demand multipath source routing protocol. SMR is similar to DSR and is used to construct maximally disjoint paths. Unlike DSR, intermediate nodes do not keep a route cache and therefore do not reply to RREQs; this allows the destination to receive all the routes so that it can select the maximally disjoint ones. Maximally disjoint paths have as few links or nodes in common as possible. Duplicate RREQs are not necessarily discarded.
2. Security Issues in Mobile Ad hoc Network
Because data in an ad hoc network is transmitted without any centralized manager, the chances of an intruder attack increase. Attacks can occur under both unipath and multipath routing, and even though the topology is dynamic, it still has many security flaws.
Intruder attacks target the algorithmic approach used for data transfer. Some of the common attacks on security are:
1. Attacks using modification – false sequence numbers
Malicious nodes can cause redirection of network traffic and DoS attacks by altering control message fields. In AODV, any node may divert traffic through itself by advertising a route to a node with a desti_sequence_num greater than the authentic value.
2. Attacks using modification – false hop counts
AODV uses the hop count field to determine the shortest path, and malicious nodes can set the hop count to zero. DSR uses source routes in data packets, so a DoS attack can be launched in DSR by altering the source routes in the packet headers.
3. Attacks using modification – tunneling
A tunneling attack is one in which two or more nodes collaborate to encapsulate messages between them.
Traditional Approach of Data Transfer in Unicast Transferring
In the standard approach, communication between two nodes is always based on the shortest path. The shortest path gives a number of benefits, such as easy implementation and fast, reliable data transfer between nodes. One common algorithm for selecting the path is given below:
Path(A, n)
/* A is the weighted graph of n nodes representing the ad hoc network */
{
    Step 1. Generate the neighbour list for the source node and put it in the matrix.
    Step 2. Starting from the first neighbour, generate the next neighbour.
    Step 3. If that neighbour already exists in the list, it is a loopback; go to end.
    Step 4. Generate the route towards the destination from each neighbour and continue on that path.
    Step 5. Generate the route to the destination from all neighbours wherever possible.
    Step 6. Compare the lengths of all the possible routes in the distance matrix and choose the path to the destination with the lowest path length.
}
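The pseudocode above amounts to a breadth-first search for the minimum-hop route, with the loopback check corresponding to skipping already-visited nodes. A minimal Python sketch over an adjacency matrix (the 5-node example network is hypothetical):

```python
from collections import deque

def shortest_path(A, src, dst):
    """BFS over an adjacency matrix A (A[i][j] == 1 means nodes i and j are
    neighbours), returning the minimum-hop route as a list of node indices."""
    n = len(A)
    parent = {src: None}              # also serves as the visited set
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []                 # walk parents back to the source
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nb in range(n):
            if A[node][nb] and nb not in parent:  # skip visited (loopback)
                parent[nb] = node
                queue.append(nb)
    return None

# Example network: links 0-1, 0-2, 1-3, 2-3, 3-4.
A = [[0, 1, 1, 0, 0],
     [1, 0, 0, 1, 0],
     [1, 0, 0, 1, 0],
     [0, 1, 1, 0, 1],
     [0, 0, 0, 1, 0]]

print(shortest_path(A, 0, 4))  # [0, 1, 3, 4]
```

BFS visits nodes in order of hop count, so the first time the destination is dequeued the reconstructed route is guaranteed to have the lowest path length.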
This approach to data transfer is very common in dynamic topologies such as sensor networks. But because an intruder can attack by following the same approach, there is a very high chance of the data being intercepted.

In this diagram, there are a number of possible paths, and the client will always select the shortest path as the most reliable and fastest. But this approach has problems with security and reliability, some of which are as follows:
1. Selecting one shortest path
  • Wireless links on the shortest path are susceptible to link attacks
  • Relatively poor protection, as in battlefields
  • Passive eavesdropping
  • Attacks from compromised nodes

2. Multipath
  • Flooding: an incoming packet is sent out on all outgoing links, so the number of hops must be limited to avoid infinite loops, or packets forwarded only once using a packet ID, or only on selected links in the right direction
  • Multicasting: terribly expensive in terms of resource utilization, although it results in minimum delay
We suggest an alternate-path approach that stays close to the shortest path while being more reliable and secure.
Security Requirements of Mobile Ad-Hoc Network
Requirements of Ad-Hoc Network are:
• Route signalling can’t be spoofed
• Fabricated routing messages can’t be injected into the network
• Routing messages can’t be altered in transit
• Routing loops can't be formed through malicious action
• Routes can’t be redirected from the shortest path by malicious action
• Unauthorized nodes should be excluded from route computation and discovery.
Path(A, n, a, b)
/* A is the adjacency matrix representation of the given network, n is the number of nodes, and a, b are the two nodes between which we have to transfer data */
{
    Step 1. Apply the transmission range of each network node and set all matrix elements outside the range to 0.
    Step 2. Find the neighbours of each node of the network, from node a through node b.
    Step 3. Find the shortest path from source to destination and store it in an array called array[ ].
    Step 4. Search the neighbour list, pick a random node from it, and put that node in the array.
    Step 5. Compare the random node with all the elements of the shortest-path array. If the array[top] element matches any element in the list, make the corresponding entry for that node in the neighbour array.
    Step 6. Compare the neighbour list of the generated node with all the elements of the array; otherwise pick another random node from the list and put it in the array.
}
Finally, we get a list of nodes that provides a safe path in the unicast case. This path is very close to the shortest path but does not include any node from the shortest-path list; because of this, it provides transmission that is secure against the intruder's attack on the algorithm implementation.
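The alternate-path idea can be sketched as two passes: first find the shortest path, then search again while banning its intermediate nodes, so the resulting route shares no node with the route an intruder would predict. This Python sketch uses deterministic BFS instead of the article's random node picks; the topology and helper names are hypothetical:

```python
from collections import deque

def bfs_path(adj, src, dst, banned=frozenset()):
    """Minimum-hop path from src to dst that avoids `banned` intermediate nodes."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nb in adj[node]:
            if nb not in parent and (nb == dst or nb not in banned):
                parent[nb] = node
                queue.append(nb)
    return None

def alternate_path(adj, src, dst):
    """Find the shortest path, then the shortest route sharing no
    intermediate node with it -- the 'safe' path proposed above."""
    shortest = bfs_path(adj, src, dst)
    if shortest is None:
        return None
    return bfs_path(adj, src, dst, banned=set(shortest[1:-1]))

# Hypothetical network: S-A-D is the shortest path; S-B-C-D avoids node A.
ADJ = {"S": ["A", "B"], "A": ["S", "D"], "B": ["S", "C"],
       "C": ["B", "D"], "D": ["A", "C"]}

print(alternate_path(ADJ, "S", "D"))  # ['S', 'B', 'C', 'D']
```

Here the predictable route S-A-D is excluded, and the transfer uses S-B-C-D: one hop longer, but node disjoint from the path an attacker targeting the standard shortest-path algorithm would watch.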