18th week of 2016 patent application highlights part 47
Patent application number - Title - Published
20160124701MIRROR DISPLAY SYSTEM AND MIRROR DISPLAY METHOD - A mirror display system and a mirror display method are presented. The mirror display system comprises a transporting device and a receiving device. The mirror display method comprises the following steps: establishing a network connection with a receiving device at the transporting device; loading a plurality of contents; transforming the plurality of contents into a plurality of display data; transporting the plurality of display data to the receiving device via the network; merging the plurality of display data into an output display data at the receiving device; transporting the output display data to a display unit for displaying. This application allows a user to watch current screens of execution of the contents of the transporting device via a single display unit.2016-05-05
20160124702Audio Bookmarking - A system and method for identifying temporal positions in audio data and accessing the identified temporal positions are disclosed. “Audio bookmarks” are created by using various types of input, such as accessing a printed control with a smart pen, providing a voice command to the smart pen or providing a written command to the smart pen to identify temporal positions within the audio data. Alternatively, one or more rules are applied to audio data by a pen-based computing system to identify temporal positions in the audio data. The audio bookmarks are associated with one or more visual, auditory or tactile indicators showing the location of the audio bookmarks in the audio data. When an indicator is accessed, a portion of the audio data is played beginning from the temporal position associated with the accessed indicator. Additional data, such as written data, may also be associated with an indicator.2016-05-05
20160124703USER TERMINAL APPARATUS, DISPLAY APPARATUS CONNECTED TO USER TERMINAL APPARATUS, SERVER, CONNECTED SYSTEM, CONTROLLER, AND CONTROLLING METHOD THEREOF - A user terminal apparatus including: a display configured to display an execution screen of a music application; a user interface configured to receive a user command; a communicator configured to perform communication with an external display apparatus; and a processor configured to: provide, in response to the user command, a user interface (UI) screen including information regarding the external display apparatus, the external display apparatus being connected to the user terminal apparatus through a network, and control the communicator to transmit identification information of a music content provided on the music application execution screen to the external display apparatus selected on the UI screen.2016-05-05
20160124704METHOD AND ELECTRONIC DEVICE FOR STORING AUDIO DATA - An electronic device is provided. The electronic device includes a memory configured to store audio data, and a processor configured to assign a weight to each of the audio data stored in the memory and delete a portion of selected audio data based on the weight of each of the audio data.2016-05-05
20160124705Communication Based on Operation Mode - Embodiments are provided for utilizing communication routes based on operation mode. In an example implementation, while operating in a first operation mode, a playback device may communicate with a second playback device of the networked media system via a first route and a second route. The playback device may determine that the first playback device is to enter a second operation mode. Responsive to the determination, the playback device may (i) transmit, to the second playback device, a message to cause the second playback device to cease communication with the first playback device via the first route, and (ii) operate in the second operation mode.2016-05-05
20160124706SYSTEM AND METHOD FOR INITIATING MULTI-MODAL SPEECH RECOGNITION USING A LONG-TOUCH GESTURE - A system, method and computer-readable storage devices are disclosed for multi-modal interactions with a system via a long-touch gesture on a touch-sensitive display. A system operating per this disclosure can receive a multi-modal input comprising speech and a touch on a display, wherein the speech comprises a pronoun. When the touch on the display has a duration longer than a threshold duration, the system can identify an object within a threshold distance of the touch, associate the object with the pronoun in the speech, to yield an association, and perform an action based on the speech and the association.2016-05-05
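As a rough illustration of the long-touch logic described in 20160124706, the sketch below binds a spoken pronoun to the nearest on-screen object only when the touch lasts longer than a duration threshold and an object lies within a distance threshold. The thresholds, the object model, and the pronoun check are illustrative assumptions, not details taken from the application.

```python
import math

DURATION_THRESHOLD = 0.5   # seconds (assumed)
DISTANCE_THRESHOLD = 60.0  # pixels (assumed)

def resolve_pronoun(touch, speech, objects):
    # a short tap gets no special treatment; only a long touch selects an object
    if touch["duration"] < DURATION_THRESHOLD:
        return None
    near = [o for o in objects
            if math.dist((touch["x"], touch["y"]), (o["x"], o["y"])) <= DISTANCE_THRESHOLD]
    if not near or "it" not in speech.lower().split():
        return None
    # associate the pronoun with the closest qualifying object
    target = min(near, key=lambda o: math.dist((touch["x"], touch["y"]), (o["x"], o["y"])))
    return {"pronoun": "it", "object": target["name"], "command": speech}

print(resolve_pronoun({"x": 100, "y": 120, "duration": 0.8},
                      "Move it to the trash",
                      [{"name": "photo.jpg", "x": 110, "y": 115}]))
```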
20160124707Facilitating Interaction between Users and their Environments Using a Headset having Input Mechanisms - A headset is described herein for presenting audio information to the user as the user interacts with a space, e.g., as when a user navigates over a route within a space. The headset may include a set of input mechanisms for receiving commands from the user. The commands, in turn, invoke respective space-interaction-related functions to be performed by a space interaction (SI) module. The headset may operate with or without a separate user computing device.2016-05-05
20160124708APPARATUS, METHOD AND PROGRAM FOR CALCULATING THE RESULT OF A REPEATING ITERATIVE SUM - An apparatus, method and program are provided for calculating a result value to a required precision of a repeating iterative sum, wherein the repeating iterative sum comprises multiple iterations of an addition using an input value. Addition is performed in a single iteration of addition as a sum operation using overlapping portions of the input value and a shifted version of the input value, wherein the shifted version of the input value has a partial overlap with the input value. At least one result portion is produced by incrementing an input derived from the input value using the output from the sum operation and the result value is constructed using the at least one result portion to give the result value to the required precision. The repeating iterative sum is thereby flattened into a flattened calculation which requires only a single iteration of addition using the input value, thus facilitating the calculation of the result value of the repeating iterative sum.2016-05-05
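20160124708 describes collapsing a repeating iterative sum into a single addition built from overlapping, shifted copies of the input value. The sketch below shows only the general principle behind such flattening, namely that N iterations of the same addition can be replaced by a handful of shifted additions (one per set bit of N); the single-iteration, overlapping-portion scheme of the claims is not reproduced.

```python
def repeated_sum_iterative(x, n):
    # n iterations of the same addition
    total = 0
    for _ in range(n):
        total += x
    return total

def repeated_sum_flattened(x, n):
    # replace the loop with one shifted addition per set bit of n
    total, shift = 0, 0
    while n:
        if n & 1:
            total += x << shift
        n >>= 1
        shift += 1
    return total

assert repeated_sum_iterative(0x1234, 1000) == repeated_sum_flattened(0x1234, 1000)
```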
20160124709FAST, ENERGY-EFFICIENT EXPONENTIAL COMPUTATIONS IN SIMD ARCHITECTURES - In one embodiment, a computer-implemented method includes receiving as input a value of a variable x and receiving as input a degree n of a polynomial function being used to evaluate an exponential function e^x.2016-05-05
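20160124709 (and the related 20160124713 below) evaluates e^x with a degree-n polynomial so that SIMD lanes can do the work with multiplies and adds. A minimal sketch, assuming a plain truncated Taylor polynomial evaluated with Horner's rule and using a NumPy array as a stand-in for SIMD lanes:

```python
import math
import numpy as np

def exp_poly(x, n):
    # evaluate the degree-n Taylor polynomial of e**x with Horner's rule;
    # operating on a whole array at once stands in for SIMD lanes
    x = np.asarray(x, dtype=np.float64)
    result = np.full_like(x, 1.0 / math.factorial(n))
    for k in range(n - 1, -1, -1):
        result = result * x + 1.0 / math.factorial(k)
    return result

xs = np.array([0.1, 0.5, 1.0])
print(exp_poly(xs, 8))  # close to np.exp(xs) for small |x|
print(np.exp(xs))
```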
20160124710DATA PROCESSING APPARATUS AND METHOD USING PROGRAMMABLE SIGNIFICANCE DATA - An apparatus may have processing circuitry to perform one or more arithmetic operations for generating a result value based on at least one operand. For at least one arithmetic operation, the processing circuitry is responsive to programmable significance data indicative of a target significance for the result value, to generate the result value having the target significance. For example, this allows programmers to set a significance boundary for the arithmetic operation so that it is not necessary for the processing circuitry to calculate bit values having a significance outside the specified boundary, enabling a performance improvement.2016-05-05
20160124711SIGNIFICANCE ALIGNMENT - A data processing system uses alignment circuitry to align input operands in accordance with a programmable significance parameter to form aligned input operands. The aligned input operands are supplied to arithmetic circuitry, such as an integer adder or an integer multiplier, where a result value is formed. The result value is stored in an output operand storage element, such as a result register. The programmable significance parameter is independent of the result value.2016-05-05
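20160124710 and 20160124711 both revolve around arithmetic whose result significance is set by a programmable parameter rather than by the operand values. A minimal sketch of that idea, assuming operands are aligned to a chosen least-significant-bit weight and then added as plain integers (the hardware alignment circuitry and register formats of the applications are not modeled):

```python
def to_anchored_int(value, lsb_exponent):
    # align a value to the programmable significance: the integer result
    # counts units of 2**lsb_exponent, discarding less-significant bits
    return int(round(value / (2.0 ** lsb_exponent)))

def anchored_add(a, b, lsb_exponent):
    # the significance of the result is fixed by the parameter, not by
    # the operand values (unlike floating point)
    total = to_anchored_int(a, lsb_exponent) + to_anchored_int(b, lsb_exponent)
    return total * (2.0 ** lsb_exponent)

print(anchored_add(1.0e6, 0.0001, -20))  # tiny addend is kept at this significance
print(anchored_add(1.0e6, 0.0001, 0))    # tiny addend below the set significance is dropped
```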
20160124712EXPONENT MONITORING - A processing apparatus 2016-05-05
20160124713FAST, ENERGY-EFFICIENT EXPONENTIAL COMPUTATIONS IN SIMD ARCHITECTURES - In one embodiment, a computer-implemented method includes receiving as input a value of a variable x and receiving as input a degree n of a polynomial function being used to evaluate an exponential function e^x.2016-05-05
20160124714EXCEPTION GENERATION WHEN GENERATING A RESULT VALUE WITH PROGRAMMABLE BIT SIGNIFICANCE - A data processing system performs processing operations upon input operand(s) having a programmable bit significance. Exception generating circuitry generates exception indications representing exceptions such as overflow, underflow and inexact in respect of a result value having the programmable bit significance.2016-05-05
20160124715MULTI-ELEMENT COMPARISON AND MULTI-ELEMENT ADDITION - An apparatus 2016-05-05
20160124716Deriving Entropy From Multiple Sources Having Different Trust Levels - Apparatus and method for generating random numbers. In accordance with some embodiments, a first multi-bit string of entropy values is derived from a first entropy source having a first trust level and a different, second multi-bit string of entropy values is derived from a second entropy source having a different, second trust level. The first and second multi-bit strings of entropy values are combined in relation to the associated first and second trust levels to generate a multi-bit random number. The multi-bit random number is used as an input to a cryptographic function.2016-05-05
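The combination rule in 20160124716 is only described as being "in relation to" the trust levels, so the sketch below simply conditions a longer string from a higher-trust source together with a shorter string from a lower-trust source through a hash. The choice of sources, the weighting-by-length, and the use of SHA-256 are all illustrative assumptions.

```python
import hashlib
import os
import time

def gather_low_trust_bits(n_bytes):
    # stand-in for a lower-trust entropy source: timing jitter (illustrative only)
    samples = bytearray()
    for _ in range(n_bytes):
        samples.append(time.perf_counter_ns() & 0xFF)
    return bytes(samples)

def mixed_random(n_bytes=32, low_trust_weight=8):
    high = os.urandom(n_bytes)                     # higher-trust source
    low = gather_low_trust_bits(low_trust_weight)  # fewer bytes: lower weight
    # condition both strings together; hashing is just one way to mix them
    return hashlib.sha256(high + low).digest()[:n_bytes]

print(mixed_random().hex())
```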
20160124717Method and System of Improved Galois Multiplication - Embodiments of the invention include an apparatus for performing Galois multiplication using an enhanced Galois table. Galois multiplication may include converting a first and second multiplicand to exponential forms using a Galois table, adding the exponential forms of the first and second multiplicands, and converting the added exponential forms of the first and second multiplicands to a decimal equivalent binary form using the Galois table to yield the decimal equivalent binary result of the Galois multiplication.2016-05-05
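A conventional way to realize the table-based multiplication that 20160124717 builds on is a pair of exp/log tables over a Galois field: convert both multiplicands to exponential form, add the exponents, and convert back with a single lookup. The sketch below does this for GF(2^8) with the AES polynomial and generator 0x03; those particular choices are assumptions, and the "enhanced" table of the application is not reproduced.

```python
# Build exp/log tables for GF(2^8) with reduction polynomial 0x11B and generator 0x03.
EXP = [0] * 512
LOG = [0] * 256
_x = 1
for _i in range(255):
    EXP[_i] = _x
    LOG[_x] = _i
    _x ^= (_x << 1) ^ (0x11B if _x & 0x80 else 0)  # multiply by the generator 0x03
for _i in range(255, 512):
    EXP[_i] = EXP[_i - 255]  # doubled table avoids a modulo on lookup

def gf_mul(a, b):
    # exponential form: one lookup each way, plus an integer add of exponents
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

print(hex(gf_mul(0x57, 0x83)))  # 0xc1 for this field (the FIPS-197 worked example)
```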
20160124718CONTEXT-BASED GENERATION OF MEMORY LAYOUTS IN SOFTWARE PROGRAMS - The disclosed embodiments provide a system that facilitates execution of a software program. During operation, the system determines a structure of a software program and an execution context for the software program from a set of possible execution contexts for the software program. Next, the system generates memory layouts for a set of object instances in the software program at least in part by applying the execution context to the structure independently of a local execution context on the computer system. The system then stores the memory layouts in association with the software program.2016-05-05
20160124719METHOD AND APPARATUS FOR CODE VIRTUALIZATION AND REMOTE PROCESS CALL GENERATION - An apparatus and method for code virtualization and remote process call code generation on a user device. A method for remote process call generation comprises sending a collection of remote processes comprising at least one selectable remote process, where each of the remote processes is correlated to at least one remote service. The method further comprises generating a code snippet for execution on the at least one user device, in response to selection of at least one remote process at the user device. The code snippet comprises a call, which when executed on the at least one user device, causes execution of the remote process on the server. The method further comprises sending the code snippet to the at least one user device, and executing the remote process in response to receiving the call at the server, where execution of the remote process causes the remote service to be performed.2016-05-05
20160124720MULTI-STEP AUTO-COMPLETION MODEL FOR SOFTWARE DEVELOPMENT ENVIRONMENTS - Systems and methods for providing auto-completion functionality in a source code editor are described. In accordance with the systems and methods, code entities that are candidates for auto-completion are presented to a user via multiple auto-completion menus that are accessed in steps rather than via a single auto-completion menu. The multiple auto-completion menus include at least a first menu and a second menu. The first menu includes a common portion (e.g., a common prefix) of a subset of the candidate code entities. The second menu includes the subset of the candidate code entities and is presented when the user selects the common portion from the first menu.2016-05-05
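To make the two-step menu flow in 20160124720 concrete, the sketch below groups candidate entities by a short common prefix for the first menu and returns the members of the chosen group for the second menu. Grouping by the first three characters is an assumed rule; an editor could just as well group by a shared dotted namespace.

```python
from collections import defaultdict

def first_menu(candidates, prefix_len=3):
    # group candidates by a short common prefix (grouping rule is assumed)
    groups = defaultdict(list)
    for name in sorted(candidates):
        groups[name[:prefix_len]].append(name)
    return groups

def second_menu(groups, chosen_prefix):
    # entries shown once the user picks a prefix from the first menu
    return groups.get(chosen_prefix, [])

candidates = ["getName", "getValue", "getVersion", "setName", "setValue"]
groups = first_menu(candidates)
print(list(groups))                 # first menu: ['get', 'set']
print(second_menu(groups, "get"))   # second menu: ['getName', 'getValue', 'getVersion']
```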
20160124721BUILD AND COMPILE STATISTICS FOR DISTRIBUTED ENGINEERS - The present technology adds code to a top level build configuration file of a configuration program that will gather metrics for each invocation of a build. These metrics are sent to a commonly accessible metric server for future analysis. The metrics are collected for a distributed engineering team over several machines. Compilation time metrics may then be collected for each compilation event and those metrics are analyzed by a common aggregator.2016-05-05
20160124722JSON STYLESHEET LANGUAGE TRANSFORMATION - Systems and methods are provided for specifying transformations of JSON objects using other JSON objects. A first object is received specified using JavaScript Object Notation. The first object includes a set of one or more attributes where each attribute is of a predetermined JSON data type and has at least one value. A second object is also received specified using JavaScript Object Notation. The second object includes a set of one or more attributes each corresponding to at least one attribute in the set of attributes of the first object and having at least one value defining one or more transformations. A third object specified using JavaScript Object Notation is generated based on transforming the first object using the second object.2016-05-05
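The abstract of 20160124722 does not spell out the transformation vocabulary of the second JSON object, so the sketch below invents a tiny one (rename an attribute, optionally upper-case it) purely to show a third object being produced from the first two.

```python
import json

def transform(source, spec):
    # 'spec' mirrors attributes of 'source'; a string rule renames the attribute,
    # a dict rule can also apply a simple operation. This vocabulary is invented
    # for illustration and is not taken from the application.
    out = {}
    for key, rule in spec.items():
        if key not in source:
            continue
        value = source[key]
        if isinstance(rule, str):
            out[rule] = value
        else:
            if rule.get("op") == "upper" and isinstance(value, str):
                value = value.upper()
            out[rule.get("rename", key)] = value
    return out

first = {"name": "ada", "id": 7}
second = {"name": {"rename": "displayName", "op": "upper"}, "id": "userId"}
print(json.dumps(transform(first, second)))  # {"displayName": "ADA", "userId": 7}
```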
20160124723GRAPHICALLY BUILDING ABSTRACT SYNTAX TREES - Graphically building an abstract syntax tree includes displaying a user interface comprising a canvas and a presentation pane; displaying on the canvas a depiction of the abstract syntax tree; and displaying within the presentation pane a plurality of elements, wherein each of the plurality of elements is selectable as a visual node to add to the depiction of the abstract syntax tree. The method also includes receiving first input indicating selection of one of the plurality of elements; and adding to the canvas a first node in the depiction of the abstract syntax tree with a first visual label related to the one of the plurality of elements.2016-05-05
20160124724AUTOMATED CODE ANALYZER - A system for analyzing source code may include a computer including a memory and a processor. A discoverer may be stored on the memory and may be configured to automatically identify applications of an infrastructure and extract at least one input source code file corresponding to the identified applications. A file reader may be stored on the memory and may be configured to read the input source code file containing source code written in at least one computer programming language. A metrics accumulator may be stored on the memory and may be configured to analyze the source code components according to one or more rules to generate application metadata. A reporting engine may be stored on the memory and configured to generate a report based on the generated application metadata.2016-05-05
20160124725IDENTIFYING IMPROVEMENTS TO MEMORY USAGE OF SOFTWARE PROGRAMS - The disclosed embodiments provide a system that facilitates the execution of a software program. During operation, the system determines a structure of a software program and an execution context for the software program from a set of possible execution contexts for the software program, wherein the software program includes one or more object instances. Next, the system uses the structure and the execution context to identify a portion of an object instance from the one or more object instances that is determined to inefficiently use memory space in the software program. The system then provides a refactoring of the object instance that reduces use of the memory space in the object instance.2016-05-05
20160124726UNIFIED DATA TYPE SYSTEM AND METHOD - A type system includes a dual representation for basic data types. One representation is the basic data type representation common to such basic built-in data types, known as an unboxed value type or simply as a value type. Each of the basic data types also has a boxed representation that can be stored in the object hierarchy of the type system. This dual representation can also be extended to user-defined types, so that user-defined types may exist both as an unboxed value type and as an object within the object hierarchy of the type system. This dual representation allows the compiler and/or runtime environment to select the most effective and efficient representation for the data type depending on the particular need at the moment.2016-05-05
20160124727Method for Checking and/or Transformation of a Computer Program with First-Class Static Functions - The invention relates to a method for checking and/or transformation of a computer program present in a programming language which supports first-class functions and in which a type check of the program or of at least a part of the program is performed in order to assign a type to each expression of the program or part of the program, the type consisting of a base type and a binding time. The set of base types comprises at least base types for describing simple values and a function type for describing functions, and the set of binding times comprises at least one static binding time and one dynamic binding time, and a function type is only accepted during the type check together with the static binding time.2016-05-05
20160124728COLLECTING PROFILE DATA FOR MODIFIED GLOBAL VARIABLES - A PGO compiler can instrument an executable to collect profile data from which global variables that were modified during the execution of a training executable can be identified. PGO optimization using a list of modified global variables identified from the profile data can be used to optimize a program in a second compilation phase. The global variables that were modified during the training run are identified by capturing a current snapshot of global variables and comparing their state to a baseline snapshot to ascertain the addresses of global variables that were modified. The addresses that changed can be mapped to global variable names to create a list of global variables that were modified during execution of the training executable. The list of global variables that have been modified can be used to enable the compiler to perform optimizations such as, but not limited to, co-locating the modified global variables in memory.2016-05-05
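The snapshot comparison in 20160124728 operates on compiled globals and their addresses; as a loose analogy in Python terms, the sketch below diffs two snapshots of a module's simple global values to list the names that changed during a "training run". The filtering rules are assumptions made to keep the example self-contained.

```python
import copy

SIMPLE = (int, float, str, bytes, bool, list, dict, tuple, set, type(None))

def snapshot(globals_dict):
    # baseline or current picture of the "global variables"
    return {name: copy.deepcopy(value)
            for name, value in globals_dict.items()
            if isinstance(value, SIMPLE) and not name.startswith("__")}

def modified_globals(baseline, current):
    # names whose values changed between the two snapshots
    return sorted(name for name, value in current.items()
                  if name in baseline and baseline[name] != value)

COUNTER = 0
MODE = "idle"
baseline = snapshot(globals())
COUNTER = 42                     # the training run mutates some globals
current = snapshot(globals())
print(modified_globals(baseline, current))  # ['COUNTER']
```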
20160124729DYNAMIC COLLECTION ATTRIBUTE-BASED COMPUTER PROGRAMMING LANGUAGE METHODS - Simplified handling of dynamic collections having a variable number of elements at run time is achieved by providing for specification of collective properties of dynamic collections by a programmer. Such collective properties are distinct from type-member properties of the collection that follow from the types and type qualifiers of its members. Preferably, such dynamic collections are attributes (i.e., members) of an application defined type.2016-05-05
20160124730HYBRID PARALLELIZATION STRATEGIES FOR MACHINE LEARNING PROGRAMS ON TOP OF MAPREDUCE - Parallel execution of machine learning programs is provided. Program code is received. The program code contains at least one parallel for statement having a plurality of iterations. A parallel execution plan is determined for the program code. According to the parallel execution plan, the plurality of iterations is partitioned into a plurality of tasks. Each task comprises at least one iteration. The iterations of each task are independent. Data required by the plurality of tasks is determined. An access pattern by the plurality of tasks of the data is determined. The data is partitioned based on the access pattern.2016-05-05
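A minimal sketch of the parallel-for handling described in 20160124730, assuming the iterations are independent and simply partitioning the iteration space into contiguous chunks executed by a process pool; the MapReduce-specific planning and data partitioning of the application are not modeled.

```python
from concurrent.futures import ProcessPoolExecutor

def body(i):
    # independent loop body: no iteration reads another iteration's result
    return i * i

def split(n, k):
    # contiguous, near-equal chunks of the iteration space 0..n-1
    size, rem = divmod(n, k)
    chunks, start = [], 0
    for i in range(k):
        end = start + size + (1 if i < rem else 0)
        chunks.append(range(start, end))
        start = end
    return chunks

def run_chunk(chunk):
    return [body(i) for i in chunk]

def parfor(n, n_tasks=4):
    results = []
    with ProcessPoolExecutor(max_workers=n_tasks) as pool:
        for part in pool.map(run_chunk, split(n, n_tasks)):
            results.extend(part)   # map preserves chunk order
    return results

if __name__ == "__main__":
    assert parfor(10) == [i * i for i in range(10)]
```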
20160124731AUTOMATED CODE-GENERATION FOR CROSS-LANGUAGE DEVELOPMENT, TESTING, AND INTEGRATION - A system and method provide for easy sharing of data between different software languages. A method begins by creating a definition defining a data structure with a domain specific language. The definition is then input to a code generator which generates data structures and algorithms in a first software language. The same generator software also creates equivalent data structures and algorithms in a second software language that is different than the first software language. The two output implementations provide compatible utilities for marshalling and de-marshalling data back and forth between the first software language and the second software language without requiring further manipulation of the two implementations.2016-05-05
20160124732OPTIMIZING DATA CONVERSION USING PATTERN FREQUENCY - Embodiments of the present invention provide systems and methods for increasing the efficiency of data conversion in a coprocessor by using the statistical occurrence of data patterns to convert frequently occurring data patterns in one conversion cycle. In one embodiment, a coprocessor system is disclosed containing a converter engine, which includes a parser and a converter, an input buffer, and a result store. The input buffer is configured to transfer a set of source data to the converter engine, which converts the source data from first code format to a second code format, and sends the converted source data to the result store.2016-05-05
20160124733REWRITING SYMBOL ADDRESS INITIALIZATION SEQUENCES - A system includes a memory to store a linker and one or modules, and a processor, communicatively coupled to the memory. The computer system is configured to recognize a first symbol address initialization sequence in a module. The system determines whether the first symbol address initialization sequence is a candidate for replacement, determines whether to replace the first symbol address initialization sequence with a second symbol address initialization sequence, and replaces the first symbol address initialization sequence with the second symbol address instruction sequence when it is determined to replace the first symbol address initialization sequence with the second symbol address initialization sequence.2016-05-05
20160124734COMMON DEPLOYMENT MODEL - In one implementation, a system for a common deployment model includes a content engine to embrace content from a number of deployment tools, a properties engine to associate a number of properties from the content to generate a component model for the number of deployment tools, a cost engine to associate the component model with a cost model, and a fulfillment engine to instantiate the component model with the associated cost model.2016-05-05
20160124735WORKLOAD DEPLOYMENT DENSITY MANAGEMENT FOR A MULTI-STAGE COMPUTING ARCHITECTURE IMPLEMENTED WITHIN A MULTI-TENANT COMPUTING ENVIRONMENT - Embodiments of the present invention provide a method, system and computer program product for workload deployment density management for a multi-stage architecture implemented within a multi-tenant computing environment. The method includes receiving different requests from different tenants of a multi-tenant computing environment to deploy respectively different application instances of respectively different computer programs into different nodes of the host computing system. The method also includes determining from each request an associated stage of a software lifecycle for a corresponding one of the application instances. Finally, the method includes deploying each of the application instances into a particular one of the nodes depending upon an associated stage of each of the application instances so that each of the nodes hosts different application instances for different tenants of a common stage of the software lifecycle.2016-05-05
20160124736SCRIPT GENERATION ENGINE AND MAPPING SEMANTIC MODELS FOR TARGET PLATFORM - The present invention is an installation script generation engine. An application component distribution system can include a repository of semantic models for interdependent ones of application components. A mapping of individual listings in the semantic models to target platform specific installation instructions further can be included. Finally, a script generation engine can be configured to produce a target specific set of instructions for a specified application component based upon a mapping of at least one of the semantic models in the repository. Notably, each of the semantic models can include a listing of component relationships, target platform requirements and platform neutral installation instructions. Moreover, the component relationships can include at least one component relationship selected from the group consisting of a containment relationship, a usage relationship, a contradiction relationship, and an equivalence relationship. Finally, a Web services interface to the repository can be configured to permit remote access to the repository.2016-05-05
20160124737AUTOMATED GENERATION OF AN APPLIANCE FOR A COMPUTING MACHINE - A computer implemented method for generating an appliance for a computing machine comprises: running a builder accessible by a user; the builder providing a selection of settings for configuring a system platform to the user; the builder providing a selection of applications to the user; the user choosing and adjusting system platform configuration settings from the selection of settings for configuring a system platform to the user; the user choosing at least one application from the selection of applications; the builder evaluating kernel modules and parameters required for running the at least one chosen application with the chosen and adjusted platform configuration settings; the builder evaluating system features required for running the at least one chosen application with the chosen and adjusted platform configuration settings; the builder composing a kernel component with the evaluated kernel modules and parameters; the builder composing a system platform initializing component with the evaluated system features; the builder assembling an appliance image comprising a boot loader, the kernel component, the system platform initializing component and the at least one chosen application. The method according to the invention allows for providing tailored, fast and low resource demanding appliances.2016-05-05
20160124738TABLET BASED AIRBORNE DATA LOADER - A method for data loading with a tablet includes receiving one or more software updates published by a ground station and storing the software updates on the tablet. The method also includes establishing communications between the tablet and a tablet interface module of an aircraft and determining if software on the aircraft needs to be updated. Based on determining that the software on the aircraft needs to be updated, the method includes transmitting at least one of the one or more software updates to the tablet interface module for loading into the aircraft and monitoring an installation process of the software updates in the aircraft via the tablet interface module.2016-05-05
20160124739Minimizing Image Copying During Partition Updates - Disclosed are apparatus and methods for updating binary images. A computing device can determine transfers for updating a binary source image to become a binary target image. A transfer can include a source memory reference for the source image and a target memory reference for the target image. The computing device can determine a graph based on ordering dependencies between the transfers. The graph can include vertices for the transfers with edges between vertices. The computing device can generate an edge from a first vertex for a first transfer to a second vertex for a second transfer, with the first transfer to be performed before the second transfer. The computing device can break any cycles present in the graph to obtain an acyclic graph. The computing device can order the transfers based on the acyclic graph and send the ordered transfers in an update package for the source image.2016-05-05
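The ordering step in 20160124739 can be pictured as a topological sort of the transfer dependency graph. The sketch below uses Kahn's algorithm and merely raises an error if a cycle is still present, whereas the application breaks such cycles before ordering (the cycle-breaking strategy is not reproduced here).

```python
from collections import defaultdict, deque

def order_transfers(transfers, depends_on):
    """Order transfers so each runs after every transfer it depends on.

    transfers  -- list of transfer ids
    depends_on -- dict: transfer id -> set of transfer ids that must run first
    """
    indegree = {t: 0 for t in transfers}
    dependents = defaultdict(list)
    for t, deps in depends_on.items():
        for d in deps:
            dependents[d].append(t)
            indegree[t] += 1

    ready = deque(t for t in transfers if indegree[t] == 0)
    ordered = []
    while ready:
        t = ready.popleft()
        ordered.append(t)
        for nxt in dependents[t]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)

    if len(ordered) != len(transfers):
        # a cycle remains; it would have to be broken (e.g. by staging data) first
        raise ValueError("dependency cycle must be broken first")
    return ordered

# transfer B reads a region that transfer A overwrites, so B must run before A
print(order_transfers(["A", "B", "C"], {"A": {"B"}, "C": {"A"}}))  # ['B', 'A', 'C']
```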
20160124740DATA STORAGE DEVICE AND METHOD FOR REDUCING FIRMWARE UPDATE TIME AND DATA PROCESSING SYSTEM INCLUDING THE DEVICE - A data storage device for reducing a firmware update time includes a non-volatile memory configured to store a firmware update image which will replace a current firmware image, a first volatile memory, and a processor configured to control an operation of the non-volatile memory and an operation of the first volatile memory. When a first code included in the current firmware image is executed by the processor, the first code generates data necessary for an operation of the data storage device and stores the data in the first volatile memory. When a second code included in the firmware update image is executed by the first code, the second code accesses and uses the data that has been stored in the first volatile memory.2016-05-05
20160124741ORCHESTRATION OF SOFTWARE APPLICATIONS UPGRADE USING AUTOMATIC HANG DETECTION - In an upgrade infrastructure performing an overall upgrade operation comprising multiple upgrade processes being executed, possibly concurrently, on multiple hosts for upgrading one or more software applications hosted by hosts, automated hang detection mechanisms are disclosed for quickly, efficiently, and automatically detecting when one or more of the upgrade process are in a hang state. Different hang detection techniques are described including a metadata-driven hang detection mechanism and a code-driven hang detection mechanism.2016-05-05
20160124742MICROSERVICE-BASED APPLICATION DEVELOPMENT FRAMEWORK - In one example, an application development framework system comprises a microservice platform for developing and executing a plurality of microservices, wherein each microservice of the microservices comprises an independently-deployable service configured to execute one or more functions to fulfill an interface contract for an interface for the microservice; and an orchestration platform for developing and executing an orchestrator to orchestrate the microservices to execute an interconnection platform for a cloud-based services exchange configured to interconnect, using one or more virtual circuits, customers of the cloud-based services exchange.2016-05-05
20160124743COMPATIBILITY AND OPTIMIZATION OF WEB APPLICATIONS ACROSS INDEPENDENT APPLICATION STORES - Systems and methods may provide for identifying a set of configuration options associated with a plurality of different application stores and using the set of configuration options to generate one or more compatibility suggestions for a web application in a development environment. Additionally, the set of configuration options may be used to generate one or more optimization suggestions for the web application in the development environment, wherein the one or more compatibility suggestions and the one or more optimization suggestions may be specific to a particular application store in the plurality of different application stores. In one example, runtime information associated with web application is identified, wherein the runtime information is used to generate at least one of the one or more optimization suggestions.2016-05-05
20160124744SUB-PACKAGING OF A PACKAGED APPLICATION INCLUDING SELECTION OF USER-INTERFACE ELEMENTS - The present disclosure relates to methods and apparatuses for forming a packaged application based on a selected subset of user-interface elements. One example method includes receiving a selection of a subset of user-interface elements of a packaged application at a device, determining data of the packaged application associated with execution of the subset of user-interface elements, and packaging the data to form another packaged application for executing the subset of user-interface elements.2016-05-05
20160124745REFINING COMPOSITION INSTRUMENTATION FOR SERVICES - In an approach for creating a service composition, a processor receives a plurality of software modules, wherein each software module performs part of a service requested by one or more users on a network. A processor collects one or more attributes and one or more dependencies for each of the plurality of software modules. A processor appends information about the attributes and the dependencies to each respective software module. A processor stores each of the plurality of software modules with the respective appended information in a database. A processor creates a service composition comprised of a combination of the plurality of software modules, based on the appended information and the service requested by the one or more users on the network.2016-05-05
20160124746VECTOR OPERANDS WITH COMPONENT REPRESENTING DIFFERENT SIGNIFICANCE PORTIONS - A data processing system supports vector operands with components representing different bit significance portions of an integer number. Processing circuitry performs a processing operation specified by a program instruction in dependence upon a number of components comprising the vector as specified by metadata for the vector.2016-05-05
20160124747PERFORMING A CLEAR OPERATION ABSENT HOST INTERVENTION - Optimizations are provided for frame management operations, including a clear operation and/or a set storage key operation, requested by pageable guests. The operations are performed, absent host intervention, on frames not resident in host memory. The operations may be specified in an instruction issued by the pageable guests.2016-05-05
20160124748NONTRANSACTIONAL STORE INSTRUCTION - A NONTRANSACTIONAL STORE instruction, executed in transactional execution mode, performs stores that are retained, even if a transaction associated with the instruction aborts. The stores include user-specified information that may facilitate debugging of an aborted transaction.2016-05-05
20160124749COALESCING ADJACENT GATHER/SCATTER OPERATIONS - According to one embodiment, a processor includes an instruction decoder to decode a first instruction to gather data elements from memory, the first instruction having a first operand specifying a first storage location and a second operand specifying a first memory address storing a plurality of data elements. The processor further includes an execution unit coupled to the instruction decoder, in response to the first instruction, to read contiguous first and second data elements from a memory location based on the first memory address indicated by the second operand, and to store the first data element in a first entry of the first storage location and a second data element in a second entry of a second storage location corresponding to the first entry of the first storage location.2016-05-05
20160124750Power-On Method and Related Server Device - A power-on method for a server device includes generating a stand-by power to a server module of the server device when a blade enable signal is asserted; asserting, by the server module, a power-on signal to a storage module of the server device; performing, by the storage module, a first boot-on process when the storage module receives the asserted power-on signal; transmitting, by the storage module, an asserted ready signal to the server module when the first boot-on process finishes; and performing, by the server module, a second boot-on process via a normal power when the server module receives the asserted ready signal.2016-05-05
20160124751ACCESS ISOLATION FOR MULTI-OPERATING SYSTEM DEVICES - The present application is directed to access isolation for multi-operating system devices. In general, a device may be configured using firmware to accommodate more than one operating system (OS) operating concurrently on the device or to transition from one OS to another. An access isolation module (AIM) in the firmware may determine a device equipment configuration and may partition the equipment for use by multiple operating systems. The AIM may disable OS-based equipment sensing and may allocate at least a portion of the equipment to each OS using customized tables. When transitioning between operating systems, the AIM may help to ensure that information from one OS is not accessible to others. For example, the AIM may detect when a foreground OS is to be replaced by a background OS, and may protect (e.g., lockout or encrypt) the files of the foreground OS prior to the background OS becoming active.2016-05-05
20160124752ELECTRONIC APPARATUS AND TEMPERATURE CONTROL METHOD THEREOF - An electronic apparatus and a temperature control method are provided. The steps of the temperature control method include: setting a preset temperature information, wherein the preset temperature information comprises a plurality of application program names and a plurality of respectively corresponding heat dissipation setting information about a heat dissipation ability of a heat dissipation apparatus; checking whether a name of an executed application program is one of the application program names or not, and selecting one of the heat dissipation setting information corresponding to the executed application program to be a selected heat dissipation information; and, driving the heat dissipation apparatus according to the selected heat dissipation setting information.2016-05-05
20160124753OPERATING SYSTEM LOAD DEVICE RESOURCE SELECTION - A method for booting is provided. A devices manager disables resources of a bootable device of a list of bootable devices having resource conflicts with a selected one of the list of bootable devices. The devices manager attempts to boot the selected bootable device. If the selected bootable device fails to boot, then the devices manager selects a next bootable device of the list of bootable devices for booting and repeats disabling resources and attempting to boot the selected next bootable device until one of the list of bootable devices boots or all bootable devices of the list of bootable devices fail to boot.2016-05-05
20160124754Virtual Function Boot In Single-Root and Multi-Root I/O Virtualization Environments - A method for virtual function boot in a system including a single-root I/O virtualization (SR-IOV) enabled server includes loading a PF driver of the PF of a storage adapter onto the server utilizing the virtual machine manager of the server; creating a plurality of virtual functions utilizing the PF driver, detecting each of the virtual functions on an interconnection bus, maintaining a boot list associated with the plurality of virtual functions, querying the storage adapter for the boot list utilizing a VMBIOS associated with the plurality of VMs, presenting the detected boot list to a VM boot manager of the VMM, and booting each of the plurality of virtual machines utilizing each of the virtual functions, wherein each VF of the plurality of VFs is assigned to a VM of the plurality of VMs via an interconnect passthrough between the VMM and the plurality of VMs.2016-05-05
20160124755COMPARISON-BASED SORT IN AN ARRAY PROCESSOR - An array processor includes a managing element having a load streaming unit coupled to multiple processing elements. The load streaming unit provides input data portions to each of a first subset of processing elements and receives output data from each of a second subset of the processing elements based on a comparatively sorted combination of the input data portions. Each processing element is configurable by the managing element to compare input data portions received from the load streaming unit or two or more of the other processing elements. Each processing unit can further select an input data portion to be output data based on the comparison, and in response to selecting the input data portion, remove a queue entry corresponding to the selected input data portion. Each processing element can provide the selected output data portion to the managing element or as an input to one of the processing elements.2016-05-05
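One way to picture the dataflow in 20160124750 is a tree of processing elements, each repeatedly comparing the heads of its two input queues and forwarding the smaller value. The generator-based sketch below mirrors that structure in spirit only; the managing element, load streaming unit, and queue bookkeeping of the claims are not modeled.

```python
def processing_element(left, right):
    # compare the heads of the two input streams and emit the smaller value,
    # consuming (dequeuing) it from its source
    left, right = iter(left), iter(right)
    a = next(left, None)
    b = next(right, None)
    while a is not None or b is not None:
        if b is None or (a is not None and a <= b):
            yield a
            a = next(left, None)
        else:
            yield b
            b = next(right, None)

def merge_tree(sorted_chunks):
    # pair streams level by level until one fully sorted stream remains
    streams = [iter(c) for c in sorted_chunks]
    while len(streams) > 1:
        nxt = [processing_element(streams[i], streams[i + 1])
               for i in range(0, len(streams) - 1, 2)]
        if len(streams) % 2:
            nxt.append(streams[-1])
        streams = nxt
    return list(streams[0]) if streams else []

print(merge_tree([[1, 4, 9], [2, 3, 8], [0, 7]]))  # [0, 1, 2, 3, 4, 7, 8, 9]
```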
20160124756KEYBOARD-ACCESSIBLE CALENDAR FOR INDIVIDUAL WITH DISABILITIES - Apparatus, methods and media for a keyboard-accessible calendar are provided. The apparatus may include, and the methods may provide, a display. The display may display a date field. The date field may receive a date input. The apparatus and methods may include a calendar layer. The calendar layer may be displayed on a screen. The calendar layer may include a pictorial calendar interface. The interface may include a plurality of dates. The apparatus may receive a set of instructions. The instructions may include a first instruction to select the date field. Upon selection of the date field, a cursor may be applied to the date field. The instructions may include a second instruction to toggle from the date field to the calendar layer. The toggling may include receiving a keystroke from the user. The toggling may include moving the focus of the cursor from the date field to the calendar layer.2016-05-05
20160124757MONITORING A MOBILE DEVICE APPLICATION - The present technology allows for a mobile device operating system to be modified in order to better monitor the performance of the mobile device applications. A mobile device file, such as a dex file for android operating system, may be extracted from an APK file for an application. The mobile device file may be analyzed, and a new mobile device file may be generated in addition to the analyzed mobile device file. The modifications may include identifying methods that should be monitored during execution of the corresponding application on a mobile device. The mobile device file, may be modified at a remote server, provided back to the mobile device, and then loaded by the mobile device at a later time.2016-05-05
20160124758A STANDALONE AUTOMATION DEVICE AND A MACHINE - A standalone automation device (2016-05-05
20160124759FACILITATING DEVICE DRIVER INTERACTIONS - Techniques are described for facilitating interactions with device driver modules. In at least some situations, the techniques include managing interactions between device driver modules and other programs or hardware devices so as to minimize disruptions related to the device driver modules, including when changes to existing device driver modules are made. Such device driver module changes may have various forms and may occur for various reasons, including to install new versions of device driver modules or otherwise upgrade existing device driver modules. Furthermore, the interactions with device driver modules may be managed in various manners, including to allow changes to occur to a device driver module while that device driver module is in use on a computing system, but without causing other programs on the computing system to be restarted or to lose existing connections to the device driver module being changed.2016-05-05
20160124760CLOUD COMPUTING SYSTEM AND METHOD - A system and method deploying cloud computing software applications and resources to mobile devices is disclosed. The system/method virtualizes the graphical user experience (GEX) and user input experience (UEX) that comprise the graphical user interface (GUI) for host application software (HAS) running on a host computer system (HCS). The virtualized GUI (VUI) GEX component is converted to a remote video stream (RVS) and communicated to a remote mobile computing device (MCD) over a computer communication network (CCN). A MCD thin client application (TCA) receives the RVS and presents this GEX content on the MCD display using a graphics experience mapper (GEM). A TCA user experience mapper (UEM) translates MCD user inputs to a form suitable for UEX protocols and communicates this user input over the CCN to the HCS for translation by the UEX into HCS operating system protocols compatible with the HAS.2016-05-05
20160124761IDLE BASED LATENCY REDUCTION FOR COALESCED INTERRUPTS - A guest operating system of a virtual machine sends a request to a hypervisor to coalesce interrupts from a networking device. The guest operating system then monitors the execution state of an application on the virtual machine to detect when the application becomes idle. Upon detecting that the application is idle, the guest operating system can send a request to the hypervisor for any coalesced interrupts that have been queued for delivery to the application. The guest operating system may then receive the coalesced interrupts from the hypervisor and deliver them to the application.2016-05-05
20160124762GUEST IDLE BASED VM REQUEST COMPLETION PROCESSING - A hypervisor identifies one or more interrupts of a networking device for a virtual machine. The hypervisor queues the interrupts and determines the execution state of at least one virtual processor of a virtual machine. Upon determining that the execution state of the virtual processor is active, the hypervisor continues queuing the interrupts of the networking device. Upon determining that the execution state of the virtual processor has changed to idle, the hypervisor provides the queued interrupts to the virtual machine.2016-05-05
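20160124761 and 20160124762 share a simple pattern: interrupts for a guest are queued while the relevant application or virtual processor is busy and delivered together once it goes idle. The toy model below captures just that queue-and-flush behavior; none of the class or method names correspond to a real hypervisor API.

```python
class Hypervisor:
    # toy model: device interrupts are coalesced while the virtual processor is
    # active and injected as a batch once it becomes idle
    def __init__(self):
        self.queued = []
        self.vcpu_state = "active"
        self.delivered = []

    def device_interrupt(self, irq):
        self.queued.append(irq)
        self._maybe_deliver()

    def set_vcpu_state(self, state):
        self.vcpu_state = state
        self._maybe_deliver()

    def _maybe_deliver(self):
        if self.vcpu_state == "idle" and self.queued:
            self.delivered.extend(self.queued)  # inject into the guest
            self.queued.clear()

hv = Hypervisor()
hv.device_interrupt("net-rx")
hv.device_interrupt("net-rx")
print(hv.delivered)           # [] -- still coalescing while the vCPU is busy
hv.set_vcpu_state("idle")
print(hv.delivered)           # both interrupts delivered together
```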
20160124763LIMITED VIRTUAL DEVICE POLLING BASED ON VIRTUAL CPU PRE-EMPTION - A hypervisor executing on a computer system identifies a request of a guest operating system of a virtual machine associated with a shared device. The shared device comprises a shared memory space between a virtual processor of the virtual machine and the hypervisor and the virtual machine has a plurality of virtual processors. The hypervisor processes the request of the guest operating system and polls the shared device for additional requests of the guest operating system. Upon determining that there are no additional requests associated with the shared device to be processed, the hypervisor determines the execution state of each virtual processor of the virtual machine. The hypervisor disables polling the shared device for requests upon determining that at least one of the plurality of virtual processors has been pre-empted.2016-05-05
20160124764AUTOMATED GENERATION OF CLONED PRODUCTION ENVIRONMENTS - Methods and systems for managing, storing, and serving data within a virtualized environment are described. In some embodiments, a data management system may manage the extraction and storage of virtual machine snapshots, provide near instantaneous restoration of a virtual machine or one or more files located on the virtual machine, and enable secondary workloads to directly use the data management system as a primary storage target to read or modify past versions of data. The data management system may allow a virtual machine snapshot of a virtual machine stored within the system to be directly mounted to enable substantially instantaneous virtual machine recovery of the virtual machine.2016-05-05
20160124765RESOURCE ALLOCATION APPARATUS, METHOD, AND STORAGE MEDIUM - A dynamic resource allocation apparatus according to an embodiment includes a usage amount calculator, a spike detector, an allocation amount calculator, and an allocation amount setter. The usage amount calculator calculates a fixed usage amount which is a resource usage amount actually used for each time slot as a division of a resource fluctuation period of a virtual machine. The spike detector detects a spike of the fixed usage amount. The allocation amount calculator calculates a resource allocation amount to be allocated to the i-th time slot based on the past fixed usage amount in the i-th time slot and a detection result of a past spike in a time slot included in a predetermined range before and after the i-th time slot. The allocation amount setter sets an allocation amount to a virtual machine monitor which controls a virtual machine.2016-05-05
20160124766COOPERATED INTERRUPT MODERATION FOR A VIRTUALIZATION ENVIRONMENT - Generally, this disclosure describes systems (and methods) of moderating interrupts in a virtualization environment. An overflow interval is defined. The overflow interrupt interval is used to trigger activation of an inactive guest so that the guest may respond to a critical event. The guest, including a network application, may be active for a first time interval and inactive for a second time interval. A latency interrupt interval may be defined. The latency interrupt interval is configured for interrupt moderation when the network application associated with a packet flow is active, i.e., when the guest including the network application is active on a processor. Of course, many alternatives, variations, and modifications are possible without departing from this embodiment.2016-05-05
20160124767VIRTUAL MACHINE BASED CONTENT PROCESSING - A set of techniques is described for enabling a virtual machine based transcoding system. The system enables any transcoding provider to make their transcoding service available to other users over a network. The system can automate the deployment, execution and delivery of the transcoding service on behalf of the transcoding provider and enable other users to use the transcoding services to transcode content. The system receives a virtual machine image, transfers the image to a location where the media content is stored and creates a virtual private network of resources that will perform the transcoding of the media content. The virtual private network may be firewalled or otherwise restricted from opening connections with external clients when transcoding the content in order to prevent malicious use of the media content.2016-05-05
20160124768MAINTAINING VIRTUAL MACHINES FOR CLOUD-BASED OPERATORS IN A STREAMING APPLICATION IN A READY STATE - A streams manager monitors performance of a streaming application, and when the performance needs to be improved, the streams manager automatically requests virtual machines from a cloud manager. The cloud manager provisions one or more virtual machines in a cloud with the specified streams infrastructure and streams application components. The streams manager then modifies the flow graph so one or more portions of the streaming application are hosted by the virtual machines in the cloud. When performance of the streaming application indicates a virtual machine is no longer needed, the virtual machine is maintained and placed in a ready state so it can be quickly used as needed in the future without the overhead of deploying a new virtual machine.2016-05-05
20160124769MAINTAINING VIRTUAL MACHINES FOR CLOUD-BASED OPERATORS IN A STREAMING APPLICATION IN A READY STATE - A streams manager monitors performance of a streaming application, and when the performance needs to be improved, the streams manager automatically requests virtual machines from a cloud manager. The cloud manager provisions one or more virtual machines in a cloud with the specified streams infrastructure and streams application components. The streams manager then modifies the flow graph so one or more portions of the streaming application are hosted by the virtual machines in the cloud. When performance of the streaming application indicates a virtual machine is no longer needed, the virtual machine is maintained and placed in a ready state so it can be quickly used as needed in the future without the overhead of deploying a new virtual machine.2016-05-05
20160124770TRANSPORTATION NETWORK MICRO-SIMULATION PRE-EMPTIVE DECOMPOSITION - In a parallel computing method performed by a parallel computing system comprising a plurality of central processing units (CPUs), a main process executes. Tasks are executed in parallel with the main process on CPUs not used in executing the main process. Results of completed tasks are stored in a cache, from which the main process retrieves completed task results when needed. The initiation of task execution is controlled by a priority ranking of tasks based on at least probabilities that task results will be needed by the main process and time limits for executing the tasks. The priority ranking of tasks is from the vantage point of a current execution point in the main process and is updated as the main process executes. An executing task may be pre-empted by a task having higher priority if no idle CPU is available.2016-05-05
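The pre-emptive decomposition in 20160124770 speculatively runs tasks on idle CPUs, ranked by how likely their results are to be needed and by their time limits, and caches the results for the main process. The sketch below assumes a simple ranking (probability first, then deadline) and does not model pre-emption of lower-priority tasks or re-ranking as execution advances.

```python
from concurrent.futures import ThreadPoolExecutor

def rank(task):
    # assumed ranking: more probable and more urgent tasks come first
    return (-task["probability"], task["deadline"])

def speculate(tasks, spare_workers=2):
    # submit tasks to spare workers in priority order; cache their futures
    cache = {}
    pool = ThreadPoolExecutor(max_workers=spare_workers)
    for task in sorted(tasks, key=rank):
        cache[task["name"]] = pool.submit(task["fn"])
    return pool, cache

def main_process(cache, needed):
    # the main process takes a completed result from the cache if the task
    # was speculated, otherwise it computes the value itself
    future = cache.get(needed["name"])
    return future.result() if future else needed["fn"]()

tasks = [
    {"name": "route_a", "probability": 0.9, "deadline": 5, "fn": lambda: "A"},
    {"name": "route_b", "probability": 0.2, "deadline": 9, "fn": lambda: "B"},
]
pool, cache = speculate(tasks)
print(main_process(cache, tasks[0]))  # 'A', served from the speculation cache
pool.shutdown()
```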
20160124771THROTTLING CIRCUITRY - Techniques are disclosed relating to processor power control and interrupts. In one embodiment, an apparatus includes a processor configured to assert an indicator that the processor is suspending execution of instructions until the processor receives an interrupt. In this embodiment, the apparatus includes power circuitry configured to alter the power provided to the processor based on the indicator. In this embodiment, the apparatus includes throttling circuitry configured to, in response to receiving a request from the power circuitry to alter the power provided to the processor, block the request until the end of a particular time interval subsequent to receipt of the request or de-assertion of the indicator. In some embodiments, the particular time interval corresponds to latency between the processor receiving an interrupt and de-asserting the indicator.2016-05-05
20160124772In-Flight Packet Processing - A method for supporting in-flight packet processing is provided. Packet processing devices (microengines) can send a request for packet processing to a packet engine before a packet comes in. The request offers a twofold benefit. First, the microengines add themselves to a work queue to request for processing. Once the packet becomes available, the header portion is automatically provided to the corresponding microengine for packet processing. Only one bus transaction is involved in order for the microengines to start packet processing. Second, the microengines can process packets before the entire packet is written into the memory. This is especially useful for large sized packets because the packets do not have to be written into the memory completely when processed by the microengines.2016-05-05
20160124773METHOD AND SYSTEM THAT MEASURES AND REPORTS COMPUTATIONAL-RESOURCE USAGE IN A DATA CENTER - The present disclosure describes methods and systems that monitor the utilization of computational resources. In one implementation, a system periodically measures the utilization of computational resources, determines an amount of computational-resource wastage, identifies the source of the wastage, and generates recommendations that reduce or eliminate the wastage. In some implementations, recommendations are generated based on a cost of the computational-resource wastage. The cost of computational-resource wastage can be determined from factors that include the cost of providing a computational resource, an amount of available computational resources, and the amount of actual computational-resource usage. Methods of presenting and modeling computational-resource usage and methods that associate an economic cost with resource wastage are presented.2016-05-05
20160124774CLUSTER RESOURCE MANAGEMENT IN A VIRTUALIZED COMPUTING ENVIRONMENT - Techniques for managing computing resources in a cluster are disclosed. In one embodiment, a method includes identifying a virtual machine requiring additional memory. The virtual machine runs on a first host computing system. Further, the method includes determining that the virtual machine does not need additional central processing unit (CPU) resources. Furthermore, the method includes identifying at least one other host computing system having the required additional memory and allocating the required additional memory available in the at least one other host computing system to the virtual machine using a connection to each host computing system having the required additional memory.2016-05-05
20160124775RESOURCE ALLOCATION CONTROL WITH IMPROVED INTERFACE - A computer system displays a user interface display with a user input mechanism that can be actuated in order to identify a set of resources, and corresponding capacities. A team configuration is stored in memory and reflects the configuration of the resources and corresponding capacities that were identified. A task dependency structure is obtained, and is indicative of an underlying project. Resources from the stored team configuration, and corresponding capacities, are assigned to the tasks in the task dependency structure and the team configuration is updated, in memory, to reflect the assignments. A display is generated that shows the state of the underlying memory, and that is indicative of a remaining capacity and a consumed capacity.2016-05-05
20160124776PROCESS FOR CONTROLLING A PROCESSING UNIT IMPROVING THE MANAGEMENT OF THE TASKS TO BE EXECUTED, AND CORRESPONDING PROCESSING UNIT - A process controls a processing unit in the presence of a task being executed by the processing unit. The processing unit includes at least one external input electrically connected to a corresponding output of the processing unit, and is associated with a level of priority of execution. The process includes, in the presence of an auxiliary-task request generated internally within the processing unit, generation by the processing unit of an auxiliary electrical signal corresponding to the request for execution of the auxiliary task. The auxiliary electrical signal is relayed to the at least one external input. A comparison is made between the priority levels respectively associated with the at least one external input and with the task being executed.2016-05-05
20160124777RESOURCE SUBSTITUTION AND REALLOCATION IN A VIRTUAL COMPUTING ENVIRONMENT - A host system reallocates resources in a virtual computing environment by first receiving a request to reallocate a first quantity of a first resource type. Next, potential trade-off groups are evaluated and a trade-off group is selected based on the evaluation. The selected trade-off group includes a set of applications running in the virtual computing environment that can use one or more alternate resource types as a substitute for the first quantity of the first resource type. After the selection, the host system reallocates the first quantity of the first resource type from the trade-off group. This reallocation may be made from the trade-off group to either a first application running in the virtual computing environment or the host system itself. If the reallocation is to the host system, then the total quantity of the first resource type allocated to applications running in the virtual computing environment is thereby reduced.2016-05-05
20160124778SYSTEM AND METHOD FOR PROVIDING DYNAMIC CLOCK AND VOLTAGE SCALING (DCVS) AWARE INTERPROCESSOR COMMUNICATION - Systems and methods that allow for Dynamic Clock and Voltage Scaling (DCVS) aware interprocessor communications among processors, such as those used in or with a portable computing device (“PCD”), are presented. During operation of the PCD at least one data packet is received at a first processing component. Additionally, the first processing component also receives workload information about a second processing component operating under dynamic clock and voltage scaling (DCVS). A determination is made, based at least in part on the received workload information, whether to send the at least one data packet from the first processing component to the second processing component or to a buffer. This provides a cost-effective way to reduce power consumption and improve battery life in PCDs with multiple cores or CPUs implementing DCVS algorithms or logic.2016-05-05
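A simplified sketch of the forwarding decision described in application 20160124778 above. The policy shown (buffer while the peer is heavily loaded so it need not raise its clock) and the threshold value are assumptions, not the patented DCVS logic.

    def route_packet(packet, peer_load, peer_busy_threshold, buffer):
        """Send the packet to the peer core if it is lightly loaded; otherwise
        buffer it so the peer can stay at a lower DCVS clock/voltage level."""
        if peer_load < peer_busy_threshold:
            return ("send_to_peer", packet)
        buffer.append(packet)
        return ("buffered", packet)

    pending = []
    print(route_packet("pkt1", peer_load=0.2, peer_busy_threshold=0.7, buffer=pending))
    print(route_packet("pkt2", peer_load=0.9, peer_busy_threshold=0.7, buffer=pending))
    print("buffered packets:", pending)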
20160124779LAUNCH CONTROL METHOD AND LAUNCH CONTROL APPARATUS - When executing a database launch process that includes a plurality of processes to be executed in a predetermined order using a memory device, a launch control apparatus stores, in a predetermined storage region of the memory device and in response to execution of each of the processes, identification information indicating the launch control apparatus and progress information indicating the progress status of the launch process. In response to completion of each of the processes, the launch control apparatus refers to the predetermined storage region and determines whether or not to continue the launch process on the basis of the statuses of the stored identification information and progress information.2016-05-05
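A sketch of the progress bookkeeping described in application 20160124779 above. The file name, field names, and step list are assumptions standing in for the predetermined storage region; the decision rule shown (continue only if this apparatus owns the recorded launch) is one plausible reading of the abstract.

    import json

    PROGRESS_FILE = "launch_progress.json"   # stands in for the predetermined storage region
    STEPS = ["load_config", "recover_logs", "open_catalog", "accept_connections"]

    def record_progress(apparatus_id, completed_step):
        # Store which apparatus runs the launch and how far it has progressed.
        with open(PROGRESS_FILE, "w") as f:
            json.dump({"apparatus_id": apparatus_id, "last_completed": completed_step}, f)

    def should_continue(apparatus_id):
        # Continue only if the recorded launch belongs to this apparatus.
        try:
            with open(PROGRESS_FILE) as f:
                state = json.load(f)
        except FileNotFoundError:
            return True, None                      # nothing recorded yet: start from the beginning
        if state["apparatus_id"] != apparatus_id:
            return False, state                    # another apparatus owns this launch
        return True, state

    if __name__ == "__main__":
        ok, state = should_continue("ctrl-1")
        if ok:
            start = 0 if state is None else STEPS.index(state["last_completed"]) + 1
            for step in STEPS[start:]:
                print("executing", step)
                record_progress("ctrl-1", step)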
20160124780Automatic Profiling Report Generation - A method for profiling an application on a virtual machine is provided. A series of analysis steps to be performed on profiled data can be created. The series of analysis steps can be saved as a report specification. A back-end profiler can then be caused to perform profiling on the application. Profiled data can be received from the back-end profiler. The profiled data can be stored as a model. The model can then be adapted based on the series of analysis steps from the report specification. Output data can be generated based on the adapted model. Finally, the output data is displayed to a user.2016-05-05
20160124781Creating and Using Service Control Functions - Concepts and technologies are disclosed herein for creating and using service control functions. The service control functions can detect a message via an adapter function. The message can relate to a service controlled by the service control functions. Service policies can be accessed. The service policies can include message handling policies and can be accessed to determine if a policy relating to the message exists. If a determination is made that the policy exists, the message and the policy can be analyzed to determine an action to take with respect to the message, and the action can be initiated.2016-05-05
20160124782SYSTEMS AND METHODS FOR COMMUNICATION BETWEEN INDEPENDENT COMPONENT BLOCKS IN MOBILE APPLICATION MODULES - Embodiments of methods and devices for creating and using independent component blocks and application modules are described. One example embodiment involves a method where a first application module and a second application module each register with a uniform resource locator (URL) handler module of a mobile device to associate a protocol name with the respective application module. The modules may then use the protocol names to send messages to other application modules using the protocol names, with the messages being relayed or communicated via a communication routing module. Additional embodiments may involve various applications receiving these messages, which may be structured as URLs. The applications may then parse the messages using URL parsers in the application to identify commands and data within messages.2016-05-05
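A sketch of the URL-based inter-module messaging described in application 20160124782 above, with a hypothetical in-process UrlRouter class standing in for the mobile platform's URL handler and communication routing module; the scheme names and message layout are invented for illustration.

    from urllib.parse import urlparse, parse_qs

    class UrlRouter:
        """Toy stand-in for a mobile platform's URL handler / routing module."""
        def __init__(self):
            self.handlers = {}            # protocol name -> application module callback

        def register(self, protocol, handler):
            self.handlers[protocol] = handler

        def send(self, url):
            parsed = urlparse(url)
            handler = self.handlers.get(parsed.scheme)
            if handler is None:
                raise ValueError(f"no module registered for scheme {parsed.scheme!r}")
            # The command is carried in the URL host/path, the data in the query string.
            command = parsed.netloc or parsed.path.lstrip("/")
            data = {k: v[0] for k, v in parse_qs(parsed.query).items()}
            handler(command, data)

    router = UrlRouter()
    router.register("photomodule", lambda cmd, data: print("photo module got", cmd, data))
    router.register("chatmodule", lambda cmd, data: print("chat module got", cmd, data))
    router.send("photomodule://share?target=chatmodule&image_id=42")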
20160124783TRACKING ASYNCHRONOUS ENTRY POINTS FOR AN APPLICATION - Asynchronous operations associated with a request, such as synchronous threads, runnable elements, callable elements, and other invokable objects, are tracked to determine metrics about the request and its operations. The present technology tracks the start and end of each asynchronous operation and maintains a counter which tracks the currently executing asynchronous operations. By monitoring the request, the start and end of each asynchronous operation associated with the request, and the number of asynchronous operations currently executing, the present technology may identify the end of a request by identifying when the last asynchronous operation associated with the request ends. In some instances, the present technology identifies the end of a request when a counter which tracks the number of asynchronous operations executing reaches a value of zero after the first asynchronous operation has already begun.2016-05-05
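A minimal sketch of the counter technique described in application 20160124783 above; the RequestTracker class and its method names are invented for illustration, not the product's API.

    class RequestTracker:
        """Tracks asynchronous operations spawned by one request; the request is
        considered finished when the counter returns to zero after the first
        operation has started."""
        def __init__(self, request_id):
            self.request_id = request_id
            self.active = 0
            self.started_any = False
            self.finished = False

        def operation_started(self):
            self.active += 1
            self.started_any = True

        def operation_ended(self):
            self.active -= 1
            if self.started_any and self.active == 0:
                self.finished = True
                print(f"request {self.request_id}: last asynchronous operation ended")

    tracker = RequestTracker("req-7")
    tracker.operation_started()      # e.g. a runnable submitted to an executor
    tracker.operation_started()      # e.g. a callable scheduled for the same request
    tracker.operation_ended()
    tracker.operation_ended()        # counter back to zero -> end of the request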
20160124784SEMICONDUCTOR MEMORY DEVICES INCLUDING ERROR CORRECTION CIRCUITS AND METHODS OF OPERATING THE SEMICONDUCTOR MEMORY DEVICES - A memory controller includes a controller input/output circuit configured to output a first command to read first data, and output a second command to read an error corrected portion of the first data. A memory device includes: an error detector, a data storage circuit and an error correction circuit. The error detector is configured to detect a number of error bits in data read from a memory cell in response to a first command. The data storage circuit is configured to store the read data if the detected number of error bits is greater than or equal to a first threshold value. The error correction circuit is configured to correct the stored data.2016-05-05
20160124785SYSTEM AND METHOD OF SAFETY MONITORING FOR EMBEDDED SYSTEMS - The safety and integrity of an embedded computer system are monitored using an independent safety monitoring module in communication with the main controller module via a serial connection to a safety monitoring module proxy in the main controller module. The main controller module is monitored through the use of alive-telegram exchanges and computational challenges. The safety monitoring module also receives temperature information and supply voltage information about the main controller module. The monitored information may be evaluated using a prognostic model constructed offline using a simulation of failure modes.2016-05-05
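A behavioral sketch of the challenge-response and envelope checks described in application 20160124785 above. The challenge (a multiplication), the temperature and voltage windows, and the function names are all illustrative assumptions, not the disclosed prognostic model.

    import random

    def computational_challenge():
        a, b = random.randint(1, 1000), random.randint(1, 1000)
        return (a, b), a * b                    # challenge for the main controller, expected answer

    def check_main_controller(answer_fn, temp_c, supply_v):
        (a, b), expected = computational_challenge()
        healthy = answer_fn(a, b) == expected   # challenge-response over the serial link
        healthy &= 0 <= temp_c <= 85            # assumed temperature envelope
        healthy &= 4.75 <= supply_v <= 5.25     # assumed supply-voltage window
        return healthy

    # Hypothetical main controller that answers challenges via the proxy.
    print(check_main_controller(lambda a, b: a * b, temp_c=55, supply_v=5.02))   # True
    print(check_main_controller(lambda a, b: a + b, temp_c=55, supply_v=5.02))   # False: wrong answer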
20160124786METHODS FOR IDENTIFYING RACE CONDITION AT RUNTIME AND DEVICES THEREOF - A method, non-transitory computer readable medium, and device that identify a race condition at run time include monitoring a client device processor during execution of an operation by the client device processor. An interrupt in the monitored client device processor is identified, and a delay is introduced in the monitored client device processor during execution of the operation upon identifying the interrupt. A race condition in a completed operation is determined using information associated with the introduced delay. Information associated with the race condition is recorded when the completed operation is determined to have resulted in the race condition.2016-05-05
20160124787ELECTRONIC SYSTEM CONFIGURATION MANAGEMENT - A method for managing a configuration of an electronic system having a plurality of locations configured to receive hardware units is disclosed. The method may include receiving hardware unit parameters corresponding to hardware units currently installed and pending installation in the electronic system and retrieving configuration data for the electronic system. The method may also include generating a plurality of hardware unit predicted times to failure (TTFs) for the plurality of locations by applying, to a failure prediction model, the hardware unit parameters for hardware units currently installed and pending installation and the configuration data for the electronic system. The method may also include using a selection criterion to select among the plurality of hardware unit predicted TTFs corresponding to the plurality of locations and reporting at least one recommended hardware unit installation location from the plurality of locations within the electronic system.2016-05-05
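A toy sketch of the selection step described in application 20160124787 above. The failure-prediction model here is a stand-in (a simple heuristic on slot temperature and age), and the selection criterion (longest predicted TTF among free slots) is an assumption, not the patented model.

    def predict_ttf_hours(unit, location):
        """Hypothetical failure-prediction model: hotter, older slots fail sooner."""
        base = 50_000.0 * unit["quality"]
        return base / (1.0 + 0.02 * location["temp_c"] + 0.01 * location["age_years"])

    def recommend_location(unit, locations):
        # Assumed selection criterion: the free location with the longest predicted TTF.
        candidates = [(predict_ttf_hours(unit, loc), loc["id"]) for loc in locations if loc["free"]]
        return max(candidates) if candidates else None

    locations = [
        {"id": "slot-1", "temp_c": 42, "age_years": 3, "free": True},
        {"id": "slot-2", "temp_c": 28, "age_years": 1, "free": True},
        {"id": "slot-3", "temp_c": 25, "age_years": 0, "free": False},
    ]
    ttf, slot = recommend_location({"quality": 0.9}, locations)
    print(f"recommended installation location: {slot} (predicted TTF about {ttf:.0f} h)")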
20160124788METHOD FOR DETECTION OF SOFT MEDIA ERRORS FOR HARD DRIVE - Some embodiments are directed to a method, corresponding system, and corresponding apparatus for detecting unexpectedly high latency due to excessive retries of a given storage device of a set of storage devices. Some embodiments may comprise a processor and associated memory. Some embodiments may monitor one or more completion time characteristics of one or more accesses between the given storage device and one or more host machines. Some embodiments may then compare the one or more completion time characteristics with a given threshold. As a result of the comparison, some embodiments may report, by the one or more host machines, at least one error associated with the given storage device. The error may be unreported by the set of storage devices.2016-05-05
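A host-side sketch of the latency check described in application 20160124788 above. The completion-time statistic (99th percentile) and the threshold value are illustrative assumptions; the abstract only specifies a comparison against a given threshold.

    import statistics

    def check_device_latency(completion_times_ms, threshold_ms):
        """Flag a drive whose access completion times suggest excessive internal
        retries, even though the drive itself reports no error."""
        p99 = statistics.quantiles(completion_times_ms, n=100)[98]   # 99th percentile
        if p99 > threshold_ms:
            return f"soft media error suspected: p99 latency {p99:.1f} ms exceeds {threshold_ms} ms"
        return None

    samples = [4.0] * 95 + [250.0] * 5       # a few accesses stall for hundreds of ms
    report = check_device_latency(samples, threshold_ms=100.0)
    if report:
        print("host report:", report)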
20160124789COMMUNICATION SOFTWARE STACK OPTIMIZATION USING DISTRIBUTED ERROR CHECKING - A method of processing a request message begins when a first layer of a plurality of layers of a system stack receives the request message. In turn, the plurality of layers negotiate an agreement based on the request message, where the agreement indicates which layers will process particular error reply codes of an error reply code list. Then, a non-controller layer of the plurality of layers performs a first error check in accordance with the agreement and records a first error result in a communication interface based on the first error check; a controller layer of the plurality of layers performs a second error check in accordance with the agreement and records a second error result in the communication interface based on the second error check. Then a reply message responsive to the request message is outputted based on the first error check and the second error check.2016-05-05
20160124790COMMUNICATION SOFTWARE STACK OPTIMIZATION USING DISTRIBUTED ERROR CHECKING - A method of processing a request message begins when a first layer of a plurality of layers of a system stack receives the request message. In turn, the plurality of layers negotiate an agreement based on the request message, where the agreement indicates which layers will process particular error reply codes of an error reply code list. Then, a non-controller layer of the plurality of layers performs a first error check in accordance with the agreement and records a first error result in a communication interface based on the first error check; a controller layer of the plurality of layers performs a second error check in accordance with the agreement and records a second error result in the communication interface based on the second error check. Then a reply message responsive to the request message is outputted based on the first error check and the second error check.2016-05-05
20160124791IDENTIFYING ORIGIN AND DESTINATION PAIRS - The present disclosure relates to identifying an origin/destination pair. Aspects include identifying an origin/destination pair in a service, which includes, in response to failure of a current operation for recording an origin/destination pair, determining a current time when the current operation is executed. Aspects also include determining a previous time when a last operation for recording an origin/destination pair was executed, and identifying a missing point causing the failure of the current operation based on the time interval between the current time and the previous time.2016-05-05
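A sketch of the timing comparison described in application 20160124791 above. The mapping from interval length to the suspected missing point is an assumed rule of thumb for illustration; the abstract only states that the missing point is identified from the interval between the current and previous record times.

    import time

    last_record_time = None          # time of the last successful origin/destination record

    def on_record_failure(now=None):
        """When recording an origin/destination pair fails, use the gap since the
        last successful record to guess where the missing point is."""
        now = time.time() if now is None else now
        if last_record_time is None:
            return "no previous record: the origin side is likely missing"
        gap = now - last_record_time
        # Assumed rule of thumb: a short gap points at the destination leg,
        # a long gap points at an intermediate point that never reported.
        return ("destination record missing" if gap < 1.0
                else f"intermediate point missing (gap {gap:.1f}s since last record)")

    last_record_time = time.time() - 5.0
    print(on_record_failure())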
20160124792FAULT ANALYSIS APPARATUS, FAULT ANALYSIS METHOD, AND RECORDING MEDIUM - An apparatus includes: a log element extraction unit that extracts log elements from log information; a combining unit that attaches, to each of the log elements, related system constituent element information and combines the log elements; a pattern extraction unit that extracts a pattern from the combined log information; a conversion unit that, when an analysis target pattern includes system constituent element information of a conversion target not included in a comparison target pattern, performs conversion between the system constituent element information of the conversion target and the similar system constituent element information in the comparison target pattern or the analysis target pattern; a comparison unit that detects a difference between the analysis target pattern and the comparison target pattern; and a presenting unit that presents, as a portion of a cause of a fault, the system constituent element information indicated by the difference.2016-05-05
20160124793LOG ANALYTICS FOR PROBLEM DIAGNOSIS - In a set of problem log entries from a computing system, a subset of the set of problem log entries are identified, which pertain to a failed request. The subset is compared to a reference model which defines log entries per request type under a healthy state of the computing system, to identify a portion of the subset of problem log entries which deviate from corresponding log entries in the reference model. In the portion of the subset, at least one high-value log entry is identified. The at least one high-value log entry is output.2016-05-05
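A sketch of the reference-model comparison described in application 20160124793 above. The log format, the reference content, and the keyword-based scoring of high-value entries are all assumptions for illustration.

    # Reference model: the log entries normally emitted for each request type
    # when the system is healthy (assumed content).
    REFERENCE_MODEL = {
        "checkout": ["auth ok", "inventory reserved", "payment authorized", "order committed"],
    }

    def diagnose(request_type, problem_entries):
        expected = set(REFERENCE_MODEL.get(request_type, []))
        deviating = [e for e in problem_entries if e not in expected]
        missing = [e for e in expected if e not in problem_entries]
        # Treat deviations mentioning errors or timeouts as the high-value entries.
        high_value = [e for e in deviating if any(k in e for k in ("error", "timeout", "refused"))]
        return {"deviating": deviating, "missing": missing, "high_value": high_value}

    failed_request_log = ["auth ok", "inventory reserved", "payment gateway timeout after 30s"]
    print(diagnose("checkout", failed_request_log))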
20160124794IDENTIFYING ORIGIN AND DESTINATION PAIRS - The present disclosure relates to identifying an origin/destination pair. Aspects include identifying an origin/destination pair in a service, which includes, in response to failure of a current operation for recording an origin/destination pair, determining a current time when the current operation is executed. Aspects also include determining a previous time when a last operation for recording an origin/destination pair was executed, and identifying a missing point causing the failure of the current operation based on the time interval between the current time and the previous time.2016-05-05
20160124795EVALUATION METHOD AND APPARATUS - When a program is executed, an operation unit acquires a log including information that identifies a variable whose input content has been changed since the previous execution of the program and information that indicates a correspondence between program elements and their respective execution results output by execution of the program elements. When an execution result of the program is different from the corresponding previous execution result, the operation unit searches the log for the program element that produced the execution result and the variable used in the program element and takes into account the change status of the content input to the found variable. Consequently, the operation unit determines the cause of the difference between the execution results and the validity of the difference. The operation unit outputs information that indicates the location of the different execution result, the cause of the difference, and the validity of the difference.2016-05-05
20160124796METHOD AND DEVICE FOR FAULT DETECTION - The disclosure concerns a method implemented by a processing device. The method includes performing a first execution by the processing device of a computing function based on one or more initial parameters stored in a first memory device. The execution of the computing function generates one or more modified values of at least one of the initial parameters, wherein during the first execution the one or more initial parameters are read from the first memory device and the one or more modified values are stored in a second memory device. The method also includes performing a second execution by the processing device of the computing function based on the one or more initial parameters stored in the first memory device.2016-05-05
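A software analogy for the double-execution scheme in application 20160124796 above. The two dictionaries stand in for the two memory devices, and the final comparison of modified values is an assumed fault-detection step; the actual disclosure concerns a processing device, not Python code.

    def run_twice_and_compare(compute, initial_params):
        """Execute `compute` twice from the same untouched initial parameters and
        compare the modified values; a mismatch suggests a transient fault."""
        first_memory = dict(initial_params)               # initial parameters, never modified
        second_memory_a = compute(dict(first_memory))     # modified values from the first execution
        second_memory_b = compute(dict(first_memory))     # modified values from the second execution
        return second_memory_a == second_memory_b, second_memory_a, second_memory_b

    def compute(params):
        params["accumulator"] = params["accumulator"] * params["gain"] + params["offset"]
        return params

    ok, a, b = run_twice_and_compare(compute, {"accumulator": 3, "gain": 2, "offset": 1})
    if ok:
        print("executions agree:", a)
    else:
        print("mismatch, possible fault:", a, "vs", b)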
20160124797Memory Bus Error Signal - A technique includes receiving, by a device, a command, wherein a response to the command is expected from the device within a predetermined response time. The device may selectively generate an error signal to allow time for the device to complete processing the command.2016-05-05
20160124798Flexible SENT Device Configuration - The present invention relates to an integrated circuit device comprising an output port for transmitting a data stream and a processor for controlling the transmission of the data stream in accordance with a single-edge nibble transmission protocol. The device also comprises a configuration means for receiving and storing configuration data. The processor is adapted for reporting a plurality of diagnostic statuses via the data stream by transmitting for each diagnostic status a corresponding diagnostic code defined by the configuration data, and wherein the processor is furthermore adapted for reporting the plurality of diagnostic statuses in a diagnostic status reporting order defined by the configuration data.2016-05-05
20160124799TARGETED CRASH FIXING - Client devices having deployed an application may experience a crash of the application. For example, a first client device may experience a first crash, having a first crash signature, of the application. After experiencing the first crash, a first device identification of the first client device may be assigned to a first bucket designating one or more device identifications of client devices having experienced the first crash. Client devices having device identifications in the first bucket are provided with a first crash fix for the first crash, while a second client device is not provided with the first crash fix, where the application is deployed on the second client device and the second client device was not assigned to the first bucket.2016-05-05
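A sketch of the bucketing described in application 20160124799 above; the in-memory dictionaries stand in for whatever backend storage a real deployment system would use, and the signature and fix names are invented.

    from collections import defaultdict

    crash_buckets = defaultdict(set)      # crash signature -> device ids that hit it
    available_fixes = {}                  # crash signature -> fix payload

    def report_crash(device_id, crash_signature):
        crash_buckets[crash_signature].add(device_id)

    def publish_fix(crash_signature, fix_payload):
        available_fixes[crash_signature] = fix_payload

    def fixes_for(device_id):
        # Only devices assigned to a crash's bucket receive that crash's fix.
        return [fix for sig, fix in available_fixes.items() if device_id in crash_buckets[sig]]

    report_crash("device-A", "sig:null-deref@PhotoView")
    publish_fix("sig:null-deref@PhotoView", "hotfix-1.2.3-photoview")
    print("device-A receives:", fixes_for("device-A"))   # gets the targeted fix
    print("device-B receives:", fixes_for("device-B"))   # never crashed, gets nothing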
20160124800MICROCONTROLLER UNIT AND METHOD OF OPERATING A MICROCONTROLLER UNIT - A microcontroller unit (MCU) having a functional state, a reset state, and one or more assertable fault sources is described. Each fault source has its own fault source assertion count and its own fault source assertion limit; the MCU is arranged to perform the following sequence of operations in a cyclic manner: if one or more of the fault sources are asserted, pass from the functional state to the reset state and increase the respective fault source assertion counts by one increment; if one or more of the fault source assertion counts exceeds the respective fault source assertion limit, disable the respective fault source; and pass from the reset state to the functional state. A method of operating an MCU is also disclosed.2016-05-05
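A behavioral sketch of the cycle described in application 20160124800 above, as a pure Python model; the fault-source names, counts, and limits are illustrative, and the state handling is simplified to show only the counting-and-disabling sequence.

    class McuModel:
        """Toy model of the functional/reset cycle with per-fault-source counters."""
        def __init__(self, limits):
            self.limits = dict(limits)                     # fault source -> assertion limit
            self.counts = {src: 0 for src in limits}       # fault source -> assertion count
            self.disabled = set()
            self.state = "functional"

        def cycle(self, asserted_sources):
            asserted = [s for s in asserted_sources if s not in self.disabled]
            if asserted:
                self.state = "reset"                       # functional -> reset
                for src in asserted:
                    self.counts[src] += 1
                    if self.counts[src] > self.limits[src]:
                        self.disabled.add(src)             # limit exceeded: disable this source
            self.state = "functional"                      # reset -> functional
            return self.counts, self.disabled

    mcu = McuModel({"watchdog": 2, "clock_monitor": 1})
    for _ in range(3):
        counts, disabled = mcu.cycle(["watchdog"])
    print("counts:", counts, "disabled:", disabled)        # watchdog disabled after exceeding its limit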