
Thursday, 17 September 2015

A Novel Management Framework for Policy Anomaly in Firewall


IJSRD found good research work in the Computer Science & Engineering research area related to policy anomalies in firewalls.

Abstract
The advent of emerging technologies such as Web services, service-oriented architecture, and cloud computing has enabled us to perform business services more efficiently and effectively. However, we still suffer from unintended security leakages caused by unauthorized actions in business services. Firewalls are the most widely deployed security mechanism to ensure the security of private networks in most businesses and institutions. The effectiveness of the security protection provided by a firewall mainly depends on the quality of the policy configured in it. Unfortunately, designing and managing firewall policies are often error-prone due to the complex nature of firewall configurations as well as the lack of systematic analysis mechanisms and tools. In this paper, we present an innovative policy anomaly management framework for firewalls, adopting a rule-based segmentation technique to identify policy anomalies and derive effective anomaly resolutions. We also discuss a proof-of-concept implementation of a visualization-based firewall policy analysis tool called Firewall Anomaly Management Environment (FAME). In addition, we demonstrate how efficiently our approach can discover and resolve anomalies in firewall policies through rigorous experiments using an automatic rule generation technique.

Key words: FAME, policy anomaly, firewall, segment

I. PROPOSED WORK AND SYSTEM ARCHITECTURE 


A distributed firewall preserves central control of the access policy while eliminating the dependency on network topology. The proposed work introduces a new ARG (Automatic Rule Generation) algorithm for distributed firewalls. The ARG algorithm automatically generates rules and detects and resolves policy anomalies in distributed firewalls. By automating the administrator's tasks in a distributed environment, it reduces complexity and increases flexibility. [1]

The proposed system architecture, shown in Fig. 1, has the following advantages: (i) no restriction to a topological boundary; (ii) automatic rule generation detects and resolves policy anomalies in distributed firewalls; (iii) it eliminates redundancy; and (iv) it reduces complexity and increases flexibility.
In the proposed work, rules and actions are generated or modified according to changes in the requirements of the dynamic environment. When a client sends a data packet to the network, the firewall checks the packet's characteristics and decides whether to allow or deny the packet into the network. [1] Firewall rule anomalies are identified using a packet space segmentation technique, the risk of each anomaly is then assessed, and the firewall rules are reordered based on that risk. Risk assessment uses upper-bound and lower-bound threshold values (a simplified sketch of this step follows the data flow diagram below).
Fig. Data Flow Diagram
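As a rough, hedged illustration of the segmentation-and-reordering idea (not the paper's actual FAME/ARG implementation), the Python sketch below treats each rule's packet space as a pair of simplified integer ranges, flags rule pairs whose spaces overlap with conflicting actions, and reorders rules using assumed upper- and lower-bound risk thresholds. All field names, risk scores, and threshold values here are hypothetical.

```python
# Hedged sketch of packet-space overlap detection and risk-based reordering.
# Rule fields, risk scores and thresholds are illustrative assumptions, not the
# paper's actual FAME/ARG implementation.

from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    src: range      # source address space, simplified to an integer range
    dst: range      # destination address space, simplified to an integer range
    action: str     # "allow" or "deny"
    risk: float     # assumed risk score from the risk-assessment step

def overlaps(a: range, b: range) -> bool:
    """True if two integer ranges share at least one value."""
    return a.start < b.stop and b.start < a.stop

def find_anomalies(rules):
    """Flag rule pairs whose packet spaces overlap but whose actions conflict."""
    anomalies = []
    for i, r1 in enumerate(rules):
        for r2 in rules[i + 1:]:
            if (overlaps(r1.src, r2.src) and overlaps(r1.dst, r2.dst)
                    and r1.action != r2.action):
                anomalies.append((r1.name, r2.name))
    return anomalies

def reorder_by_risk(rules, low=0.3, high=0.7):
    """Group rules by assumed lower/upper risk thresholds: high-risk first."""
    def bucket(r):
        if r.risk >= high:
            return 0          # above the upper bound: evaluate first
        if r.risk <= low:
            return 2          # below the lower bound: evaluate last
        return 1
    return sorted(rules, key=lambda r: (bucket(r), -r.risk))

rules = [
    Rule("R1", range(0, 100), range(0, 50), "allow", 0.2),
    Rule("R2", range(50, 150), range(20, 60), "deny", 0.8),
]
print(find_anomalies(rules))                      # [('R1', 'R2')]
print([r.name for r in reorder_by_risk(rules)])   # ['R2', 'R1']
```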

The proposed work includes the following stages:
  • Automatic Rule Generation
  • Packet Space Segmentation
  • Action Constraint Generation
  • Rule Reordering
  • Data Package

 A. Automatic Rule Generation:


When a client wants to send data packets to the network, a set of firewall rules must be satisfied before the packets are allowed, as shown in Fig. 2. For this, network administrators at different locations allocate certain firewall rules to the server. Here, the generation of firewall rules and actions is done automatically, based on certain specifications and constraints. [1] The specifications are taken and mapped randomly to generate the firewall rules. The rules are generated in the rule engine, and the engine acts when a client sends a data packet to it.
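The sketch below is a simplified, hedged illustration of this idea in Python: administrator-supplied specifications are randomly mapped to concrete rules, and a tiny rule engine decides whether to allow or deny an incoming packet. The specification values, field names, and matching logic are assumptions for illustration, not the paper's actual ARG algorithm.

```python
# Illustrative sketch of automatic rule generation from administrator-supplied
# specifications, plus a minimal rule engine. All values are hypothetical.

import ipaddress
import random

# Specifications and constraints supplied by administrators at different sites.
SPEC = {
    "subnets":   ["10.0.1.0/24", "10.0.2.0/24", "192.168.1.0/24"],
    "ports":     [22, 80, 443, 3306],
    "protocols": ["tcp", "udp"],
    "actions":   ["allow", "deny"],
}

def generate_rules(spec, count=5, seed=42):
    """Randomly map specification values to concrete firewall rules."""
    rng = random.Random(seed)
    return [{
        "id":       f"R{i + 1}",
        "src":      rng.choice(spec["subnets"]),
        "dst":      rng.choice(spec["subnets"]),
        "port":     rng.choice(spec["ports"]),
        "protocol": rng.choice(spec["protocols"]),
        "action":   rng.choice(spec["actions"]),
    } for i in range(count)]

def decide(rule_set, packet):
    """Tiny rule engine: return the action of the first rule matching the packet."""
    for rule in rule_set:
        if (ipaddress.ip_address(packet["src"]) in ipaddress.ip_network(rule["src"])
                and ipaddress.ip_address(packet["dst"]) in ipaddress.ip_network(rule["dst"])
                and packet["port"] == rule["port"]
                and packet["protocol"] == rule["protocol"]):
            return rule["action"]
    return "deny"   # default-deny when no rule matches

rules = generate_rules(SPEC)
packet = {"src": "10.0.1.15", "dst": "192.168.1.7", "port": 80, "protocol": "tcp"}
print(decide(rules, packet))
```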


For More Click Here...

WebSite: www.ijsrd.com

Friday, 21 August 2015

Emergent Artificial Intelligence

What happens when a computer can learn on the job?
Artificial intelligence (AI) is, in simple terms, the science of doing by computer the things that people can do. Over recent years, AI has advanced significantly: most of us now use smartphones that can recognize human speech, or have travelled through an airport immigration queue using image-recognition technology. Self-driving cars and automated flying drones are now in the testing stage before anticipated widespread use, while for certain learning and memory tasks, machines now outperform humans. Watson, an artificially intelligent computer system, beat the best human candidates at the quiz game Jeopardy.
Artificial intelligence, in contrast to normal hardware and software, enables a machine to perceive and respond to its changing environment. Emergent AI takes this a step further, with progress arising from machines that learn automatically by assimilating large volumes of information. An example is NELL, the Never-Ending Language Learning project from Carnegie Mellon University, a computer system that not only reads facts by crawling through hundreds of millions of web pages, but attempts to improve its reading and understanding competence in the process in order to perform better in the future.
Like next-generation robotics, improved AI will lead to significant productivity advances as machines take over certain tasks from humans – and even perform them better. There is substantial evidence that self-driving cars will reduce collisions, and the resulting deaths and injuries, from road transport, as machines avoid human errors, lapses in concentration and defects in sight, among other problems. Intelligent machines, having faster access to a much larger store of information, and able to respond without human emotional biases, might also perform better than medical professionals in diagnosing diseases. The Watson system is now being deployed in oncology to assist in diagnosis and personalized, evidence-based treatment options for cancer patients.
Long the stuff of dystopian sci-fi nightmares, AI clearly comes with risks – the most obvious being that super-intelligent machines might one day overcome and enslave humans. This risk, while still decades away, is taken increasingly seriously by experts, many of whom signed an open letter coordinated by the Future of Life Institute in January 2015 to direct the future of AI away from potential pitfalls. More prosaically, economic changes prompted by intelligent computers replacing human workers may exacerbate social inequalities and threaten existing jobs. For example, automated drones may replace most human delivery drivers, and self-driven short-hire vehicles could make taxis increasingly redundant.
On the other hand, emergent AI may make attributes that are still exclusively human – creativity, emotions, interpersonal relationships – more clearly valued. As machines grow in human intelligence, this technology will increasingly challenge our view of what it means to be human, as well as the risks and benefits posed by the rapidly closing gap between man and machine.
independent.academia.edu/IJSRD
ijsrdindia.blogspot.com/
www.ijsrd.com
http://www.ijsrd.com/SubmitManuscript

Tuesday, 18 August 2015

Fuel cell vehicles

Zero-emission cars that run on hydrogen
“Fuel cell” vehicles have long been promised, as they potentially offer several major advantages over electric and hydrocarbon-powered vehicles. However, the technology has only now begun to reach the stage where automotive companies are planning to launch them for consumers. Initial prices are likely to be in the range of $70,000, but should come down significantly as volumes increase within the next couple of years.
Unlike batteries, which must be charged from an external source, fuel cells generate electricity directly, using fuels such as hydrogen or natural gas. In practice, fuel cells and batteries are combined, with the fuel cell generating electricity and the batteries storing this energy until demanded by the motors that drive the vehicle. Fuel cell vehicles are therefore hybrids, and will likely also deploy regenerative braking – a key capability for maximizing efficiency and range.
Unlike battery-powered electric vehicles, fuel cell vehicles behave as any conventionally fuelled vehicle. With a long cruising range – up to 650 km per tank (the fuel is usually compressed hydrogen gas) – a hydrogen fuel refill only takes about three minutes. Hydrogen is clean-burning, producing only water vapour as waste, so fuel cell vehicles burning hydrogen will be zero-emission, an important factor given the need to reduce air pollution.
There are a number of ways to produce hydrogen without generating carbon emissions. Most obviously, renewable electricity from wind and solar can be used to electrolyse water, though the overall energy efficiency of this chain is likely to be quite low (a rough illustration follows below). Hydrogen can also be split from water in high-temperature nuclear reactors, or generated from fossil fuels such as coal or natural gas with the resulting CO2 captured and sequestered rather than released into the atmosphere.
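As a rough back-of-the-envelope illustration of why the electrolysis route is considered inefficient, the short calculation below multiplies ballpark stage efficiencies; the figures are assumed typical values and vary widely with the specific technology.

```python
# Rough illustration of well-to-wheel efficiency for the renewable electricity
# -> hydrogen -> fuel-cell pathway. Stage efficiencies are ballpark assumptions.

stages = {
    "electrolysis (electricity -> H2)": 0.70,
    "compression and storage":          0.90,
    "fuel cell (H2 -> electricity)":    0.60,
    "electric drivetrain":              0.90,
}

overall = 1.0
for name, eff in stages.items():
    overall *= eff
    print(f"{name:<34} {eff:.0%}   cumulative: {overall:.0%}")

# Roughly a third of the original electricity reaches the wheels, compared with
# well over half when charging a battery-electric vehicle directly.
```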
As well as the production of cheap hydrogen on a large scale, a significant challenge is the lack of a hydrogen distribution infrastructure that would be needed to parallel and eventually replace petrol and diesel filling stations. Long distance transport of hydrogen, even in a compressed state, is not considered economically feasible today. However, innovative hydrogen storage techniques, such as organic liquid carriers that do not require high-pressure storage, will soon lower the cost of long-distance transport and ease the risks associated with gas storage and inadvertent release.
Mass-market fuel cell vehicles are an attractive prospect, because they will offer the range and fuelling convenience of today’s diesel and petrol-powered vehicles while providing the benefits of sustainability in personal transportation. Achieving these benefits will, however, require the reliable and economical production of hydrogen from entirely low-carbon sources, and its distribution to a growing fleet of vehicles (expected to number in the many millions within a decade).
http://goo.gl/yN1Ijg
https://goo.gl/BxFD7U
https://goo.gl/Kc6p5M
http://goo.gl/sIgs2u
https://goo.gl/iJF19D
http://goo.gl/R2jy3u
https://goo.gl/JyrGZE
http://www.ijsrd.com/SubmitManuscript

Tuesday, 11 August 2015

Special Issue For Image Processing



The best 25 papers will be published online. Participate in this special issue for a chance to win the Best Paper Award for Image Processing; other authors will also have a chance to win special prizes.

What is Image Processing?
Image processing is a method of converting an image into digital form and performing operations on it, in order to obtain an enhanced image or to extract useful information from it. It is a type of signal processing in which the input is an image, such as a video frame or photograph, and the output may be an image or characteristics associated with that image. Usually, an image processing system treats images as two-dimensional signals while applying established signal processing methods to them.
It is among the rapidly growing technologies of today, with applications in many aspects of business. Image processing also forms a core research area within the engineering and computer science disciplines. Image processing usually refers to digital image processing, but optical and analog image processing are also possible.
Analog or visual techniques of image processing can be used for hard copies such as printouts and photographs. Image analysts apply various fundamentals of interpretation when using these visual techniques. The processing is not confined to the area being studied; it also depends on the knowledge of the analyst. Association is another important tool in visual image processing. Analysts therefore apply a combination of personal knowledge and collateral data to image processing.
Digital processing techniques help in manipulating digital images using computers. Raw data from imaging sensors on satellite platforms contains deficiencies; to overcome such flaws and recover the original information, the data has to undergo various phases of processing. The three general phases that all types of data go through in digital processing are pre-processing, enhancement and display, and information extraction.
If you have worked on any part of image processing, prepare a research paper and submit it to us.
Image processing basically includes the following three steps (a short code sketch follows the list):
  • Importing the image with an optical scanner or by digital photography. The acquisition of the image (producing the input image in the first place) is referred to as imaging.
  • Analyzing and manipulating the image, which includes data compression, image enhancement, and spotting patterns that are not visible to human eyes, as in satellite photographs.
  • Output, the last stage, in which the result can be an altered image or a report based on the image analysis.
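A minimal sketch of these three steps in Python, assuming the Pillow imaging library is available and using a placeholder input file name:

```python
# Minimal sketch of the three steps using the Pillow library (assumed to be
# installed via `pip install Pillow`); "photo.jpg" is a placeholder file name.

from PIL import Image, ImageEnhance, ImageFilter

# 1. Import: acquire the digitized image (here, read a photograph from disk).
img = Image.open("photo.jpg")

# 2. Analyze and manipulate: enhance contrast and sharpen to bring out patterns.
enhanced = ImageEnhance.Contrast(img).enhance(1.5)
sharpened = enhanced.filter(ImageFilter.SHARPEN)

# 3. Output: save the altered image and print a tiny "report" about it.
sharpened.save("photo_enhanced.jpg")
print("size:", img.size, "mode:", img.mode)
```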

Purpose of Image processing
The purposes of image processing can be divided into the following groups:
  • Visualization – observe objects that are not otherwise visible.
  • Image sharpening and restoration – create a better image.
  • Image retrieval – seek out the image of interest.
  • Measurement of pattern – measure various objects in an image.
  • Image recognition – distinguish the objects in an image.

Applications of Image processing
Image processing has been an important stream of research in various fields. Some of the application areas of image processing are:
Intelligent Transportation Systems – e.g. automatic number plate recognition, traffic sign recognition.
Remote Sensing – e.g. imaging of the Earth's surface using multispectral scanners/cameras, techniques to interpret captured images, etc.
Object Tracking – e.g. automated guided vehicles, motion-based tracking, object recognition.
Defense Surveillance – e.g. analysis of spatial images, object distribution pattern analysis for various wings of defense, Earth imaging using UAVs, etc.
Biomedical Imaging & Analysis – e.g. imaging using X-ray, ultrasound, computed tomography (CT), etc., disease prediction using acquired images, digital mammograms, etc.
Automatic Visual Inspection Systems – e.g. automatic inspection of incandescent lamp filaments, automatic surface inspection systems, faulty component identification, etc.
And many other applications.
To contribute your research work in image processing, please prepare an article and submit it to us.

http://www.ijsrd.com/SpecialIssue
http://www.ijsrd.com/SubmitManuscript