10th week of 2009 patent application highlights part 82
Patent application number | Title | Published |
20090064106 | Reusing Components in a Running Application - Methods, systems, and apparatus, including computer program products, for reusing a component. In one aspect, a method includes executing a source application in an application environment; presenting a reusable component in a source application window corresponding to the source application, wherein the reusable component is visibly distinguishable from one or more non-reusable components displayed in the source application window; receiving input selecting the reusable component in the source application window and adding the reusable component to a target application window corresponding to a target application; and inserting one or more computer-readable instructions associated with the reusable component into the target application. Further, input can be received to activate a reuse function associated with the source application. Additionally, the reusable component can be visibly distinguishable from one or more non-reusable components displayed in the source application window only when the reuse function is active. | 2009-03-05 |
20090064107 | Token Transformation Profiles and Identity Service Mediation Node for Dynamic Identity Token - A method comprising, sending a request from a requester to a provider via a bus, the request including a provider description and an identity token of a first type, wherein the bus includes an identity service mediation node having data associating a plurality of provider descriptions with corresponding identity token types, determining with the identity service node an identity token type associated with the provider description, updating the request with data including the identity token type associated with the provider description, sending the updated request with data including the identity token type associated with the provider description to an identity service via the bus, transforming the identity token of the first type into an identity token of the type associated with the provider description with the identity service, and sending the request including the transformed identity token to the provider via the bus. | 2009-03-05 |
20090064108 | Configuring Software Stacks - The present disclosure is directed to a system and method for configuring software stacks. In some implementations, a method for configuring devices includes automatically identifying one or more applications in the software stack based, at least in part, on at least one of a plurality of identifiable device models or types. The software stack is stored in a device. The one or more applications is automatically configured for execution in the device in accordance with the identified device model. Each of the plurality of identifiable device models is associated with a different configuration of the software stack. | 2009-03-05 |
20090064109 | METHODS, SYSTEMS, AND COMPUTER PRODUCTS FOR EVALUATING ROBUSTNESS OF A LIST SCHEDULING FRAMEWORK - Systems, methods, and computer products for evaluating robustness of a list scheduling framework. Exemplary embodiments include a method for evaluating the robustness of a list scheduling framework, the method including identifying a set of compiler benchmarks known to be sensitive to an instruction scheduler, running the set of benchmarks against a heuristic under test, H, and collecting an execution time Exec(H[G]), where G is a directed acyclic graph, running the set of benchmarks against a plurality of random heuristics H | 2009-03-05 |
20090064110 | MINING LIBRARY SPECIFICATIONS USING INDUCTIVE LEARNING - A system and method for mining program specifications includes generating unit tests to exercise functions of a library through an application program interface (API), based upon an (API) signature. A response to the unit tests is determined to generate a transaction in accordance with a target behavior. The transaction is converted into a relational form, and specifications of the library are learned using an inductive logic programming tool from the relational form of the transaction. | 2009-03-05 |
20090064111 | Formal Verification of Graphical Programs - System and method for formal verification of a graphical program. A graphical program comprising a plurality of interconnected nodes is created in response to input. One or more correctness assertions regarding program state of the graphical program are specified in response to user input, and a proof obligation generated based on the graphical program and the correctness assertions, which is usable by a theorem prover to determine correctness of the graphical program. The proof obligation may be generated by compiling the graphical program to generate an object-level diagram, parsing the correctness assertions to generate an intermediate logical form of the one or more correctness assertions, and analyzing the object-level diagram, the intermediate logical form, and/or semantics of the graphical programming language in which the graphical program is written to generate the proof obligation. A theorem prover may then process the proof obligation to determine whether the graphical program is correct. | 2009-03-05 |
20090064112 | TECHNIQUE FOR ALLOCATING REGISTER TO VARIABLE FOR COMPILING - The present invention relates to allocating registers to variables in order to compile a program. In an embodiment of the present invention a compiler apparatus stores interference information indicating an interference relationship between variables, selects a register and allocates the register to each variable in accordance with a predetermined procedure, without allocating the same register to a set of variables having interference relationships. The compiler further replaces multiple variables having the same register allocated thereto with a new variable and generates an interference relationship by merging the interference relationships each concerning one of multiple variables. The compiler further updates interference information according to the generated interference relationship and allocates to each variable in the program using the new variable a register, selected in accordance with the predetermined procedure without allocating the same register to a set of variables having the interference relationships, based on the updated interference information. | 2009-03-05 |
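To make the register-allocation flow in 20090064112 concrete, here is a minimal Python sketch of greedy allocation over an interference graph followed by the merge-and-update step the abstract describes; the graph representation, variable names, and the absence of real spill handling are illustrative assumptions, not taken from the filing.

    # Greedy register allocation over an interference graph, then the merge step.
    def allocate(interference, registers):
        """Assign a register to every variable so that interfering variables differ."""
        assignment = {}
        for var in sorted(interference):                       # deterministic order
            taken = {assignment[n] for n in interference[var] if n in assignment}
            free = [r for r in registers if r not in taken]
            if not free:
                raise RuntimeError(f"spill needed for {var}")  # spilling not modelled here
            assignment[var] = free[0]
        return assignment

    def merge_same_register(interference, assignment):
        """Merge variables that received the same register into one new variable and
        union their interference sets, as the abstract describes."""
        by_reg = {}
        for var, reg in assignment.items():
            by_reg.setdefault(reg, []).append(var)
        rename = {}
        for group in by_reg.values():
            new_var = "+".join(sorted(group))
            for v in group:
                rename[v] = new_var
        merged = {}
        for group in by_reg.values():
            new_var = rename[group[0]]
            neighbours = {rename[n] for v in group for n in interference[v]}
            neighbours.discard(new_var)
            merged[new_var] = neighbours
        return merged

    graph = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
    alloc = allocate(graph, ["r0", "r1"])
    print(alloc)                               # {'a': 'r0', 'b': 'r1', 'c': 'r0'}
    print(merge_same_register(graph, alloc))   # {'a+c': {'b'}, 'b': {'a+c'}}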
20090064113 | METHOD AND SYSTEM FOR DYNAMIC LOOP TRANSFER BY POPULATING SPLIT VARIABLES - A method that provides for dynamic loop transfer for a method having a first set of instructions being executed by an interpreter is provided. An execution stack includes slots for storing a value of each local variable known to each subroutine while the subroutine is active. The method comprises suspending execution at a point for which a current execution state can be captured from the execution stack; assigning the value in each slot of the execution stack to a corresponding slot of an array of values; scanning the first set of instructions to identify a data type for local variable that is not known in the current execution state and shares a slot in the execution stack with a local variable that is known; and generating a second set of instructions for the method coded to be initially executed to declare each local variable that is known in the current execution state and each local variable for which a data type was identified, assign each declared variable with the value assigned to the slot in the array that corresponds to the slot of the execution stack in which the value of the variable is stored during execution of the first set of instructions, and branch to a target point in the second set of instructions that corresponds to the point at which execution was suspended. | 2009-03-05 |
20090064114 | SYSTEMS, METHODS, AND COMPUTER PRODUCTS FOR AUTOMATED INJECTION OF JAVA BYTECODE INSTRUCTIONS FOR JAVA LOAD TIME OPTIMIZATION VIA RUNTIME CHECKING WITH UPCASTS - Automated injection of Java bytecode instructions for Java load time optimization via runtime checking with upcasts. Exemplary embodiments include a method including generating a stack for each of a plurality of bytecodes, generating a subclass configured to keep a history of instructions that have modified the stack, statically scanning a plurality of Java classes associated with the plurality of bytecodes to locate class file configurations and bytecode patterns that cause loading of additional classes to complete a verification of each of the classes in the plurality of Java classes, rewriting the bytecodes to delay the loading of the additional classes until required at a runtime, recording modifications that have been made to the stack by the instructions, and applying the modifications to each of the bytecodes in the plurality of bytecodes. | 2009-03-05 |
20090064115 | Enabling graphical notation for parallel programming - In one embodiment, the present invention includes a method for developing of a parallel program by specifying graphical representations for input data objects into a parallel computation code segment, specifying graphical representations for parallel program schemes, each including at least one graphical representation of an operator to perform an operation on an data object, determining if any of the parallel program schemes include at least one alternative computation, and unrolling the corresponding parallel program schemes and generating alternative parallel program scheme fragments therefrom. Other embodiments are described and claimed. | 2009-03-05 |
20090064116 | Constructor Argument Optimization in Object Model for Folding Multiple Casts Expressions - A method and computer program product, for providing an optimization for a most derived object during compile time are provided. The optimization determines whether a most derived class object is present during a compile time. Also, the optimization utilizes the most derived class object to obtain a location of a virtual base for the most derived class object during the compile time, and provides the virtual base of the most derived class object during the compile time. The method is executed for a constructor and/or a destructor. The constructor or destructor contains arguments which require conversion to a base type, and the conversion is performed at compile-time instead of at runtime. | 2009-03-05 |
20090064117 | Device, System, and Method of Computer Program Optimization - Device, system, and method of computer program optimization. For example, an apparatus to analyze a plurality of versions of computer program includes: a code analyzer to determine one or more code differences between first and second versions of the computer program, based on at least one optimization log associated with at least one of the first and second versions of the computer program. | 2009-03-05 |
20090064118 | SOFTWARE DEOBFUSCATION SYSTEM AND METHOD - A system and method are disclosed that enable automated deobfuscation of software. A method may include identifying at least one section of target software matching trigger criteria, either by using pattern matching or behavior analysis; emulating at least a portion of the identified section; and generating deobfuscated software by substituting a simplified section for the identified section. The method may further be iterated. Emulation includes simulating the effect of certain instructions on control flow and/or memory locations, such as the program stack, a register, cache memory, heap memory, or other memory. The simplified section may comprise a number of no operation (NOP) instructions, which may then be jumped over for further simplification. | 2009-03-05 |
20090064119 | Systems, Methods, And Computer Products For Compiler Support For Aggressive Safe Load Speculation - Systems, methods and computer products for compiler support for aggressive safe load speculation. Exemplary embodiments include a method for aggressive safe load speculation for a compiler in a computer system, the method including building a control flow graph, identifying both countable and non-countable loops, gathering a set of candidate loops for load speculation, for each candidate loop in the set of candidate loops gathered for load speculation, performing: computing an estimate of the iteration count, delay cycles, and code size, performing a profitability analysis and determining an unroll factor based on the delay cycles and the code size, transforming the loop by generating a prologue loop to achieve data alignment and an unrolled main loop with loop directives, indicating which loads can safely be executed speculatively and performing low-level instruction scheduling on the generated unrolled main loop. | 2009-03-05 |
20090064120 | Method and apparatus to achieve maximum outer level parallelism of a loop - In one embodiment, the present invention includes a method for constructing a data dependency graph (DDG) for a loop to be transformed, performing statement shifting to transform the loop into a first transformed loop according to at least one of first and second algorithms, performing unimodular and echelon transformations of a selected one of the first or second transformed loops, partitioning the selected transformed loop to obtain maximum outer level parallelism (MOLP), and partitioning the selected transformed loop into multiple sub-loops. Other embodiments are described and claimed. | 2009-03-05 |
20090064121 | SYSTEMS, METHODS, AND COMPUTER PRODUCTS FOR IMPLEMENTING SHADOW VERSIONING TO IMPROVE DATA DEPENDENCE ANALYSIS FOR INSTRUCTION SCHEDULING - Systems, methods and computer products for implementing shadow versioning to improve data dependence analysis for instruction scheduling. Exemplary embodiments include a method to identify loops within the code to be compiled, for each loop initializing a dependence matrix, for each loop identifying shadow symbols that are accessed by the loop, examining dependencies, storing, comparing and classifying the dependence vectors, generating new shadow symbols, replacing the old shadow symbols with the new shadow symbols, generating alias relationships between the newly created shadow symbols, scheduling instructions and compiling the code. | 2009-03-05 |
20090064122 | Evaluating Computer Driver Update Compliance - Evaluating computer driver update compliance including applying a hashing algorithm to the contents of a driver repository, yielding a first hash value, the driver repository containing installed drivers for a computer; dating the first hash value; storing the first hash value and the date of the first hash value; identifying a candidate update for a driver installed in the repository, the candidate update having an update date; again applying the hashing algorithm to the contents of the driver repository, yielding a second hash value; comparing the first hash value and the second hash value; if the first hash value and the second hash value match, comparing the date of the first hash value and the update date; and if the update date is later than the date of the first hash value, reporting that the candidate update has not yet been installed. | 2009-03-05 |
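A minimal Python sketch of the hash-and-date comparison described in 20090064122; the directory-walk hashing, the choice of SHA-256, and the example file names are assumptions for illustration only.

    import datetime, hashlib, os, tempfile

    def repo_hash(repo_dir):
        """Hash the contents of the driver repository (every file, in a stable order)."""
        digest = hashlib.sha256()
        for root, _dirs, files in os.walk(repo_dir):
            for name in sorted(files):
                with open(os.path.join(root, name), "rb") as handle:
                    digest.update(handle.read())
        return digest.hexdigest()

    def report(first_hash, first_date, second_hash, update_date):
        """Compare the two hash values and dates as the abstract describes."""
        if first_hash == second_hash and update_date > first_date:
            return "candidate update has not yet been installed"
        return "repository contents changed since the baseline hash"

    repo = tempfile.mkdtemp()
    with open(os.path.join(repo, "net_driver.bin"), "wb") as f:
        f.write(b"driver v1")
    baseline = repo_hash(repo)                       # first hash value
    baseline_date = datetime.date(2009, 1, 15)       # date of the first hash value
    update_date = datetime.date(2009, 3, 5)          # candidate update's date
    print(report(baseline, baseline_date, repo_hash(repo), update_date))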
20090064123 | SOFTWARE UPDATE SYSTEM AND METHOD - There is provided a method, system and computer program for updating at least one component in a multi-component software application. The method includes receiving application data describing characteristics of the software application, receiving update data describing at least one update applicable to the software application and reviewing the application data and update data to determine whether the at least one update is applied to the software application. | 2009-03-05 |
20090064124 | Firmware updating system and method for updating the same - A firmware updating system and a method for updating the same are provided. The firmware updating system comprises an image updating device and an embedded device. The image updating device comprises a first storage device and a merging module. The first storage device is for storing a first header and a first file system. The merging module is for merging the first header and the first file system to output a first image file. The embedded device comprises a second storage device and a self-updating module. The second storage device is for storing a second image file. The second image file includes a second header, a second file system, and a third file system. The self-updating module is for updating the second file system of the second image file as the first file system according to the first image file. | 2009-03-05 |
20090064125 | Secure Upgrade of Firmware Update in Constrained Memory - A hardware-based security module may contain executable code used to manage the electronic device in which the security module resides. Because the security module may have limited memory, a memory update process is used that allows individual blocks to be separately downloaded and verified. Verification data is sent in a header block prior to sending the individual data blocks. | 2009-03-05 |
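A hedged Python sketch of per-block verification in constrained memory as in 20090064125, assuming the header block simply carries one SHA-256 digest per fixed-size data block; the block size and digest choice are illustrative, not the patented protocol.

    import hashlib

    BLOCK_SIZE = 4096   # bytes per downloaded block (illustrative)

    def build_header(firmware):
        """Sender side: split the image into blocks and list one digest per block."""
        blocks = [firmware[i:i + BLOCK_SIZE] for i in range(0, len(firmware), BLOCK_SIZE)]
        return [hashlib.sha256(b).hexdigest() for b in blocks], blocks

    def receive(header, block_stream):
        """Receiver side: verify each block as it arrives, holding one block at a time."""
        image = bytearray()
        for index, block in enumerate(block_stream):
            if hashlib.sha256(block).hexdigest() != header[index]:
                raise ValueError(f"block {index} failed verification; re-request it")
            image.extend(block)        # on a real device this would be written to flash
        return bytes(image)

    firmware = bytes(10000)            # dummy 10 KB image of zero bytes
    header, blocks = build_header(firmware)
    assert receive(header, iter(blocks)) == firmware
    print(f"{len(blocks)} blocks verified against the header")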
20090064126 | Versioning compatibility - A remoting client and a remoting server are described. In one embodiment, the remoting client has a client remote access application, a client invoker, a marshaller, and an unmarshaller. The client remote access application provides a version indicator of the client remote access application and receives a version indicator of a server remote access application. The client invoker generates an invocation request including the version indicator of the client remote access application. The client remote access application determines a compatible version between the client remote access application and the server remote access application based on the version indicator of the client remote access application and the version indicator of the server remote access application. | 2009-03-05 |
20090064127 | Unattended upgrade for a network appliance - A method and apparatus for upgrading a network appliance. In one embodiment, the method includes determining that an upgrade of the network appliance is needed using versioning information of the network appliance and upgrade versioning information, and determining, based on upgrade criteria, whether the network appliance should be upgraded using a full install image. If the network appliance should be upgraded using the full install image, the full install image is downloaded to the network appliance. | 2009-03-05 |
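One plausible reading of the decision flow in 20090064127, sketched in Python: decide whether an upgrade is needed from the versioning information, then decide from upgrade criteria whether a full install image is required. The version tuples and the criteria keys (major_version_jump, incremental_unsupported) are invented for illustration.

    def upgrade_plan(appliance_version, available_version, criteria):
        """Decide whether to upgrade and whether a full install image is required.
        `criteria` is an illustrative dict standing in for the upgrade criteria."""
        if available_version <= appliance_version:
            return "no upgrade needed"
        needs_full_image = (criteria.get("major_version_jump", False)
                            or criteria.get("incremental_unsupported", False))
        if needs_full_image:
            return "download full install image to the appliance"
        return "apply incremental upgrade"

    print(upgrade_plan((1, 2), (2, 0), {"major_version_jump": True}))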
20090064128 | SYSTEM AND METHOD FOR UPDATING DATA IN REMOTE DEVICES - A central host performs an automated method of updating multiple remote devices. In one embodiment, the host recognizes a predetermined download time and, in advance of the download time, transmits a calendar update to multiple remote devices. The calendar update includes the download time, and the remote devices may utilize the download time to set calendar reminders for entering an active state. Within a short time after reaching the download time, the host pushes download data to the remote devices by broadcasting the download data. In one aspect, the host may receive message acknowledgements from remote devices in response to a first calendar update, and the host may automatically transmit additional calendar updates to any remote devices that did not receive the first calendar update. Additional embodiments involve related methods and the terminal devices that receive the updates. | 2009-03-05 |
20090064129 | SUSPEND AND RESUME MECHANISMS ON A FLASH MEMORY - An apparatus for providing a file to a target device comprises a communication controller and a processor. The processor acquires download status indicating that a portion of the file has been successfully programmed in nonvolatile memory of the target device, determines a resuming point of the file according to the acquired download status, and transmits a portion of the file from the determined resuming point to the target device via the communication controller. The portion of file is programmed in the nonvolatile memory of the target device. | 2009-03-05 |
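A minimal Python sketch of resuming a transfer from the point indicated by the download status, as in 20090064129; the chunk size and the send callable are illustrative stand-ins for the communication controller.

    def resume_transfer(file_bytes, programmed_bytes, send, chunk_size=512):
        """Resume a transfer from the first byte not yet programmed on the target.
        `programmed_bytes` stands in for the acquired download status and `send`
        for the communication controller; both are illustrative."""
        for offset in range(programmed_bytes, len(file_bytes), chunk_size):
            send(file_bytes[offset:offset + chunk_size])

    sent = []
    resume_transfer(b"\xff" * 2000, programmed_bytes=1536, send=sent.append)
    print(len(sent), "chunk(s) resent,", sum(len(c) for c in sent), "bytes")  # 1 chunk, 464 bytes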
20090064130 | UPDATING A WORKFLOW WHEN A USER REACHES AN IMPASSE IN THE WORKFLOW - Provided are a method, system, and article of manufacture for updating a workflow when a user reaches an impasse in the workflow. A workflow program processes user input at a current node in a workflow comprised of nodes and workflow paths connecting the nodes, and wherein the user provides user input to traverse through at least one workflow path to reach the current node. The workflow program processes user input at the current node to determine whether there is a next node in the workflow for the processed user input. The workflow program transmits information on the current node to an analyzer in response to determining that there is no next node in the workflow. The analyzer processes the information on the current node to determine whether there are modifications to the current node. The analyzer transmits to the workflow program an update including the determined modifications to the current node in response to determining the modification. | 2009-03-05 |
20090064131 | POST-INSTALL CONFIGURATION FOR APPLICATIONS - Embodiments of the present teachings provide for standardized post installation configuration of a software application. For Linux-based applications, a portal service provides a Red Hat Packet Manager (“RPM”) package that includes selected software to be installed on a user's computing device, and a post install configuration file (“PIC”). A post-install configurator accesses the PIC file and performs post-installation configuration based on the contents of the PIC file. The PIC file thus provides a standardized mechanism in which software vendors can specify post-installation configuration of their applications, without having to develop their own tools or applications. | 2009-03-05 |
20090064132 | Registration process for determining compatibility with 32-bit or 64-bit software - A registration process for computers as part of a provisioning system that automatically determines the appropriate components to install in each computer system. The registration process ensures that the configuration information necessary for provisioning of software components that are appropriate to each system are collected. The registration process can identify support for 64-bit components. The registration process checks a field in the processor to determine longword, that is 64-bit support, or checks an entry in a file maintained by an operating system to determine 64-bit support. | 2009-03-05 |
20090064133 | Provisioning for 32-bit or 64-bit systems - A provisioning system to automatically determine the appropriate components to install or make available for installation on a target computer system. The provisioning system ensures the provisioning of software components that are appropriate to each target computer system without requiring user input. The provisioning system can identify support for 64-bit software components. The provisioning system checks a field in the processor to determine longword, that is 64-bit support, or checks an entry in a file maintained by an operating system to determine 64 bit support. If 64-bit support is not detected then a 32-bit component is installed to ensure that the target computer system is capable of executing the software component. | 2009-03-05 |
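Both 20090064132 and 20090064133 hinge on detecting 64-bit ("longword") support from a processor field or an entry in an OS-maintained file. A hedged Python sketch of that check; /proc/cpuinfo and the package names are assumptions, since the abstracts do not name a specific file or field.

    import platform

    def supports_64bit():
        """Return True when 64-bit components may be provisioned.  Checks the machine
        architecture reported by the operating system and, on Linux, the 'lm' (long
        mode) flag in /proc/cpuinfo -- one possible OS-maintained source."""
        if platform.machine().lower() in ("x86_64", "amd64", "aarch64", "arm64"):
            return True
        try:
            with open("/proc/cpuinfo") as cpuinfo:
                for line in cpuinfo:
                    if line.startswith("flags"):
                        return "lm" in line.split()
        except OSError:
            pass
        return False

    component = "service-64bit.pkg" if supports_64bit() else "service-32bit.pkg"
    print("provisioning", component)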
20090064134 | SYSTEMS AND METHODS FOR CREATING AND EXECUTING FILES - The invention generally relates to systems and methods for creating and executing files. In one embodiment, the file includes an executable program, a parameter for use by the executable program in subsequent operation, and an identifier that includes at least a first attribute computed from the executable program but not from the parameter. The first attribute is for facilitating subsequent detection of changes to the executable program but not for facilitating detection of changes to the parameter. | 2009-03-05 |
20090064135 | Bootstrapper and software download manager - The present invention provides a bootstrapper and download manager for handling the download and installation of one or more software products to a computer. The invention determines system requirements and whether any prerequisite software is required by a software product to be downloaded. Any necessary prerequisite software is installed on the computer and if more than one software product has a shared prerequisite, then the invention recognizes that and prevents downloading multiple ones of the shared prerequisites. Also, in the event of an interruption or error during download, the invention can resume downloading or installation based on the download successfully stored on the local machine without requiring the download all over again. This saves considerable time during the download and install process and enhances user productivity and experience. A download manager provides a user interface to efficiently select from multiple software products for download and negotiate issues such as multiple and different product licenses. | 2009-03-05 |
20090064136 | UTILIZING SYSTEM CONFIGURATION INFORMATION TO DETERMINE A DATA MIGRATION ORDER - Methods, systems and computer program products for utilizing system configuration information to determine a data migration order. The method includes computer instructions for establishing communication from a source virtual machine to a target virtual machine, the source virtual machine including a memory. The configuration information associated with the source virtual machine is determined and utilized to determine an order of migration for pages in the memory. The pages in the memory are transmitted to the target virtual machine in the order of migration. | 2009-03-05 |
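A minimal Python sketch of ordering memory pages for migration from configuration information, as in 20090064136; the region tags and the migration_order key are invented for illustration, since the abstract does not specify the ordering rule.

    def migrate(pages, config, transmit):
        """Order memory pages using the source VM's configuration, then transmit them.
        `pages` maps page number -> (region_tag, data); `config["migration_order"]`
        ranks region tags.  All of these names are illustrative."""
        rank = {tag: i for i, tag in enumerate(config["migration_order"])}
        ordered = sorted(pages.items(), key=lambda kv: rank.get(kv[1][0], len(rank)))
        for page_no, (tag, data) in ordered:
            transmit(page_no, tag, data)

    def transmit(page_no, tag, data):
        print(f"sending {tag} page {page_no} ({len(data)} bytes) to the target VM")

    migrate({0: ("kernel", bytes(4096)), 1: ("user", bytes(4096)), 2: ("kernel", bytes(4096))},
            {"migration_order": ["kernel", "user"]}, transmit)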
20090064137 | Method and Apparatus for Determining a Service Cluster Topology Based on Static Analysis - The service assignment tool analyzes a service to determine whether the service can execute on a cluster. If the service cannot execute on a cluster, the service is assigned to a single virtual machine. The service assignment tool identifies non-cluster friendly services by performing a static analysis on the bytecode of the service. The bytecode of the service is analyzed by comparing each segment of bytecode to a list of known good and bad coding conventions. If each segment of bytecode in a service meets the good coding convention criteria, then the service is cluster friendly. If one segment of bytecode does not meet the good coding convention criteria, then the entire service is considered to be not cluster friendly. | 2009-03-05 |
20090064138 | APPARATUS, SYSTEM, AND METHOD FOR GATHERING TRANSACTION STATISTICS DATA FOR DELIVERING TRANSACTION STATISTICS DATA AS JAVA OBJECTS VIA JMX NOTIFICATION - An apparatus, system, and method are disclosed for gathering transaction statistics data for a real time transaction system that delegates data persistence of the transaction statistics data to one or more clients by utilizing a JMX notification system. Specifically, transaction statistics data is collected in real time for each transaction executed by a real time transaction system; a JMX notification is generated that includes the transaction statistics data; and an MBean server broadcasts the JMX notification object to listeners, which then capture and persist the transaction statistics data on the client side, thereby minimizing the impact of data logging on the transaction system. | 2009-03-05 |
20090064139 | Method for Data Processing Using a Multi-Tiered Full-Graph Interconnect Architecture - A method is provided for implementing a multi-tiered full-graph interconnect architecture. In order to implement a multi-tiered full-graph interconnect architecture, a plurality of processors are coupled to one another to create a plurality of processor books. The plurality of processor books are coupled together to create a plurality of supernodes. Then, the plurality of supernodes are coupled together to create the multi-tiered full-graph interconnect architecture. Data is then transmitted from one processor to another within the multi-tiered full-graph interconnect architecture based on an addressing scheme that specifies at least a supernode and a processor book associated with a target processor to which the data is to be transmitted. | 2009-03-05 |
20090064140 | System and Method for Providing a Fully Non-Blocking Switch in a Supernode of a Multi-Tiered Full-Graph Interconnect Architecture - A method, computer program product, and system are provided for transmitting data from a first processor of a data processing system to a second processor of the data processing system. In one or more switches, a set of virtual channels is created, the one or more switches comprising, for each processor, a corresponding switch in the one or more switches. The data is transmitted from the first processor to the second processor through a path comprising a subset of processors of a set of processors in the data processing system. In each processor of the subset of processors, the data is stored in a virtual channel of a corresponding switch before transmitting the data to a next processor. The virtual channel of the corresponding switch in which the data is stored corresponds to a position of the processor in the path through which the data is transmitted. | 2009-03-05 |
20090064141 | EFFICIENT UTILIZATION OF TRANSACTIONS IN COMPUTING TASKS - A method of performing a computing transaction is disclosed. In one disclosed embodiment, during performance of a transaction, if an operation in a transaction can currently be performed, then a result for the operation is received from a transaction system. On the other hand, if the operation in the transaction cannot currently be performed, then a message indicating that the operation would fail is received from the transaction system. The transaction ends after receiving for each operation in the transaction a result or a message indicating that the operation would fail. | 2009-03-05 |
20090064142 | INTELLIGENT RETRY METHOD USING REMOTE SHELL - Method for issuing and monitoring a remote batch job, method for processing a batch job, and system for processing a remote batch job. The method for issuing and monitoring a remote batch job includes formatting a command to be sent to a remote server to include a sequence identification composed of an issuing server identification and a time stamp, forwarding the command from the issuing server to the remote server for processing, and determining success or failure of the processing of the command at the remote server. When the failure of the processing of the command at the remote server is determined, the method further includes instructing the remote server to retry the command processing. | 2009-03-05 |
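A hedged Python sketch of the issue-and-retry flow in 20090064142, using plain ssh as the remote shell; the sequence-id format (issuing server id plus a Unix timestamp) and the retry limit are assumptions.

    import subprocess, time

    def run_remote(issuing_server_id, host, command, max_retries=3):
        """Issue a batch command over a remote shell, tagged with a sequence id built
        from the issuing server id and a timestamp, and retry on failure."""
        sequence_id = f"{issuing_server_id}-{int(time.time())}"
        for attempt in range(1, max_retries + 1):
            result = subprocess.run(["ssh", host, command],
                                    capture_output=True, text=True)
            if result.returncode == 0:          # success judged from the exit status
                return result.stdout
            print(f"job {sequence_id}: attempt {attempt} failed, instructing a retry")
        raise RuntimeError(f"job {sequence_id} failed after {max_retries} attempts")

    # Example (requires a reachable host):  run_remote("srv01", "batch.example.com", "nightly.sh")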
20090064143 | Subscribing to Progress Indicator Threshold - Methods and apparatus, including computer program products, implementing and using techniques for providing a notification to a user about the progress of a task running on a digital processing device. A user input identifying a progress indicator for the task running on the digital processing device is received. A user input selecting a threshold value is received. The threshold value indicates a point on the progress indicator at which the user is to be notified about the progress of the task. A notification is provided to the user when the threshold value is reached. | 2009-03-05 |
20090064144 | Community boundaries in a geo-spatial environment - A method and system of community boundaries in a geo-spatial environment are disclosed. In one embodiment, a method of organizing a community network includes obtaining a location on a geo-spatial map, determining a representative in the community network associated with the location, obtaining a community boundary selection associated with a community from the representative, determining a region corresponding to the community boundary selection on the geo-spatial map, and creating a community boundary associated with the community on the geo-spatial map from the community boundary selection. The method may further include determining a residence of a member of the community network in the region, and associating the member with the community based on the residence. The method may also include obtaining a privacy preference corresponding to the community, and hiding a profile associated with the member from a public view of the community network based on the privacy preference. | 2009-03-05 |
20090064145 | Computer System and Method for Activating Basic Program Therein - A computer system capable of executing a basic program for providing a program execution environment. The system has a storage device for storing data that is necessary to the basic program during startup, and, for each basic program, configuration data that indicates information relating to data necessary during startup. In the computer system, data relating to the basic program that is to be started is read from the storage device, data necessary during startup is acquired from the storage device on the basis of information written in the configuration data, the data necessary during startup is stored in memory space that is in the memory device and that can be accessed from the basic program that is to be started, and a process for starting the designated basic program is executed. | 2009-03-05 |
20090064146 | INSTRUCTION GENERATING APPARATUS, DOCUMENT PROCESSING SYSTEM AND COMPUTER READABLE MEDIUM - An instruction generating apparatus includes a receiving section and a generating section. The receiving section receives job information including a plurality of jobs determined in a given order. Each job includes a process of a document by a processing device. The generating section generates instruction information based on the job information received by the receiving section. The instruction information includes, in the given order, a plurality of sets of (i) document corresponding to each job and (ii) detailed process of the document so as to instruct the processing device to perform each document process. | 2009-03-05 |
20090064147 | TRANSACTION AGGREGATION TO INCREASE TRANSACTION PROCESSING THROUGHPUT - Provided are techniques for increasing transaction processing throughput. A transaction item with a message identifier and a session identifier is obtained. The transaction item is added to an earliest aggregated transaction in a list of aggregated transactions in which no other transaction item has the same session identifier. A first aggregated transaction in the list of aggregated transactions that has met execution criteria is executed. In response to determining that the aggregated transaction is not committing, the aggregated transaction is broken up into multiple smaller aggregated transactions and a target size of each aggregated transaction is adjusted based on measurements of system throughput. | 2009-03-05 |
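The aggregation rule in 20090064147 (add each item to the earliest aggregated transaction that has no item from the same session) can be sketched in a few lines of Python; the tuple representation of a transaction item is an assumption.

    def add_item(aggregated, item):
        """Add (message_id, session_id) to the earliest aggregated transaction that does
        not already contain an item with the same session id; otherwise start a new one."""
        message_id, session_id = item
        for txn in aggregated:                                  # list preserves age order
            if all(existing_session != session_id for _, existing_session in txn):
                txn.append(item)
                return
        aggregated.append([item])                               # no suitable transaction

    aggregated = []
    for item in [("m1", "s1"), ("m2", "s2"), ("m3", "s1"), ("m4", "s1")]:
        add_item(aggregated, item)
    print(aggregated)
    # [[('m1', 's1'), ('m2', 's2')], [('m3', 's1')], [('m4', 's1')]]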
20090064148 | Linking Transactions with Separate Systems - Methods and apparatuses enable linking stateful transactions with multiple separate systems. The first and second stateful transactions are associated with a transaction identifier. Real time data from each of the multiple systems is concurrently presented within a single operation context to provide a transparent user experience. Context data may be passed from one system to another to provide a context in which operations in the separate systems can be linked. | 2009-03-05 |
20090064149 | Latency coverage and adoption to multiprocessor test generator template creation - A multi-core multi-node processor system has a plurality of multiprocessor nodes, each including a plurality of microprocessor cores. The plurality of microprocessor nodes and cores are connected and form a transactional communication network. The multi-core multi-node processor system has further one or more buffer units collecting transaction data relating to transactions sent from one core to another core. An agent is included which calculates latency data from the collected transaction data, processes the calculated latency data to gather transaction latency coverage data, and creates random test generator templates from the gathered transaction latency coverage data. The transaction latency coverage data indicates at least the latencies of the transactions detected during collection of the transaction data having a pre-determined latency, and includes, for example, four components for transaction type latency, transaction sequence latency, transaction overlap latency, and packet distance latency. Thus, random test generator templates may be created using latency coverage. | 2009-03-05 |
20090064150 | Process Manager - A process manager ( | 2009-03-05 |
20090064151 | METHOD FOR INTEGRATING JOB EXECUTION SCHEDULING, DATA TRANSFER AND DATA REPLICATION IN DISTRIBUTED GRIDS - Scheduling of job execution, data transfers, and data replications in a distributed grid topology are integrated. Requests for job execution for a batch of jobs are received, along with a set of job requirements. The set of job requirements includes data objects needed for executing the jobs, computing resources needed for executing the jobs, and quality of service expectations. Execution sites are identified within the grid for executing the jobs based on the job requirements. Data transfers needed for providing the data objects for executing the batch of jobs are determined, and data for replication is identified. A set of end-points is identified in the distributed grid topology for use in data replication and data transfers. A schedule is generated for data transfer, data replication and job execution in the grid in accordance with global objectives. | 2009-03-05 |
20090064152 | SYSTEMS, METHODS AND COMPUTER PRODUCTS FOR CROSS-THREAD SCHEDULING - Systems, methods and computer products for cross-thread scheduling. Exemplary embodiments include a cross thread scheduling method for compiling code, the method including scheduling a scheduling unit with a scheduler sub-operation in response to the scheduling unit being in a non-multithreaded part of the code and scheduling the scheduling unit with a cross-thread scheduler sub-operation in response to the scheduling unit being in a multithreaded part of the code. | 2009-03-05 |
20090064153 | COMMAND SELECTION METHOD AND ITS APPARATUS, COMMAND THROW METHOD AND ITS APPARATUS - When selecting one command within a processor from a plurality of command queues vested with order of priority, the order of priority assigned to the plurality of command queues is dynamically changed so as to select a command, on a priority basis, from a command queue vested with a higher priority from among the plurality of command queues in accordance with the post-change order of priority. | 2009-03-05 |
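A minimal Python sketch of selecting commands from prioritized queues while dynamically changing the order of priority, as in 20090064153; the demote-after-service rotation policy is one illustrative way to change priorities, not necessarily the patented one.

    from collections import deque

    class CommandSelector:
        """Select commands from several queues ordered by priority, dynamically rotating
        the priority order after each selection (the rotation policy is illustrative)."""
        def __init__(self, queue_names):
            self.priority = list(queue_names)                 # index 0 = highest priority
            self.queues = {name: deque() for name in queue_names}

        def enqueue(self, name, command):
            self.queues[name].append(command)

        def select(self):
            for name in self.priority:
                if self.queues[name]:
                    command = self.queues[name].popleft()
                    self.priority.remove(name)                # demote the queue just served
                    self.priority.append(name)
                    return command
            return None

    sel = CommandSelector(["q0", "q1"])
    sel.enqueue("q0", "read A"); sel.enqueue("q0", "read B"); sel.enqueue("q1", "write C")
    print([sel.select() for _ in range(3)])      # ['read A', 'write C', 'read B']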
20090064154 | IMAGE RECONSTRUCTION SYSTEM WITH MULTIPLE PARALLEL RECONSTRUCTION PIPELINES - In a method, system, computer-readable medium and watchdog module to control a number of medical technology processes that are executed in multiple computerized pipelines according to a predetermined organizational structure, a priority is associated with an incoming process, with a high priority and multiple low priorities being provided. A process with a high priority is executed in a priority pipeline among the multiple pipelines. | 2009-03-05 |
20090064155 | TASK MANAGER AND METHOD FOR MANAGING TASKS OF AN INFORMATION SYSTEM - Information about a device may be emotively conveyed to a user of the device. Input indicative of an operating state of the device may be received. The input may be transformed into data representing a simulated emotional state. Data representing an avatar that expresses the simulated emotional state may be generated and displayed. A query from the user regarding the simulated emotional state expressed by the avatar may be received. The query may be responded to. | 2009-03-05 |
20090064156 | COMPUTER PROGRAM PRODUCT AND METHOD FOR CAPACITY SIZING VIRTUALIZED ENVIRONMENTS - A computer system determines an optimal hardware system environment for a given set of workloads by allocating functionality from each workload to logical partitions, where each logical partition includes resource demands, assigning a priority weight factor to each resource demand, configuring potential hardware system environments, where each potential hardware system environment provides resource capacities, and computing a weighted sum of least squares metric for each potential hardware system environment. | 2009-03-05 |
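One plausible reading of the weighted sum-of-least-squares metric in 20090064156, sketched in Python: for each resource, the priority weight times the squared gap between total partition demand and the candidate environment's capacity. The resource names, weights, and the "lower is better" scoring are invented for illustration, not the patented metric.

    def weighted_fit(partitions, capacities, weights):
        """Score a candidate hardware environment as sum over resources of
        weight * (total demand - capacity)^2.  Lower is better."""
        score = 0.0
        for resource, capacity in capacities.items():
            demand = sum(p.get(resource, 0.0) for p in partitions)
            score += weights.get(resource, 1.0) * (demand - capacity) ** 2
        return score

    partitions = [{"cpu": 2.0, "mem_gb": 8}, {"cpu": 1.5, "mem_gb": 4}]
    candidates = {"small box": {"cpu": 4, "mem_gb": 16}, "big box": {"cpu": 16, "mem_gb": 64}}
    weights = {"cpu": 2.0, "mem_gb": 1.0}
    best = min(candidates, key=lambda name: weighted_fit(partitions, candidates[name], weights))
    print("best environment:", best)               # small box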
20090064157 | ASYNCHRONOUS DATA STRUCTURE PULL APPLICATION PROGRAMMING INTERFACE (API) FOR STREAM SYSTEMS - Provided are techniques for processing data items. A limit on the number of dequeue operations allowed in a current step of processing for a queue-like data structure is set, wherein the number of allowed dequeue operations limit at least one of an amount of CPU resources and an amount of memory resources to be used by an operator. The operator to perform processing is selected and the operator is activated by passing control to the operator, which then dequeues data constrained by the limits set. In response to receiving control back from the operator, the data structure size is examined to determine whether the operator made forward progress in that the operator enqueued or dequeued at least one data item. | 2009-03-05 |
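A minimal Python sketch of the pull pattern in 20090064157: a dequeue limit is set for the step, the operator is activated and dequeues data within that limit, and forward progress is judged from the change in data-structure size. The class and operator here are illustrative.

    class PullQueue:
        """Queue-like structure that enforces a per-step limit on dequeue operations."""
        def __init__(self, items, limit):
            self.items = list(items)
            self.limit = limit            # max dequeues allowed in the current step
            self.used = 0

        def dequeue(self):
            if self.used >= self.limit or not self.items:
                return None               # operator must yield control
            self.used += 1
            return self.items.pop(0)

    def run_step(queue, operator):
        """Activate an operator for one step and report whether it made forward progress."""
        size_before = len(queue.items)
        queue.used = 0
        operator(queue)
        return len(queue.items) != size_before

    def doubling_operator(queue):
        while (item := queue.dequeue()) is not None:
            print("processed", item * 2)

    q = PullQueue([1, 2, 3, 4, 5], limit=2)
    print("forward progress:", run_step(q, doubling_operator))   # two items dequeued, True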
20090064158 | MULTI-CORE RESOURCE UTILIZATION PLANNING - Techniques for multi-core resource utilization planning are provided. An agent is deployed on each core of a multi-core machine. The agents cooperate to perform one or more tests. The tests result in measurements for performance and thermal characteristics of each core and each communication fabric between the cores. The measurements are organized in a resource utilization map and the map is used to make decisions regarding core assignments for resources. | 2009-03-05 |
20090064159 | SYSTEM AND METHOD FOR OPTIMIZING LOAD DISTRIBUTION ACROSS LOGICAL AND PHYSICAL RESOURCES IN A STORAGE SYSTEM - An apparatus, system and method to optimize load distribution across logical and physical resources in a storage system. An apparatus in accordance with the invention may include an availability module and an allocation module. The availability module may dynamically assign values to resources in a hierarchical tree structure. Each value may correspond to an availability parameter such as allocated volumes, current resource utilization, and historic resource utilization. The allocation module may serially process the values and allocate a load to a least busy resource in the hierarchical tree structure based on the assigned values. | 2009-03-05 |
20090064160 | Transparent lazy maintenance of indexes and materialized views - Described herein is a materialized view or index maintenance system that includes a task generator component that receives an indication that an update transaction has committed against a base table in a database system. The task generator component, in response to the update transaction being received, generates a maintenance task for one or more of a materialized view or an index that is affected by the update transaction. A maintenance component transparently performs the maintenance task when a workload of a CPU in the database system is below a threshold or when an indication is received that a query that uses the one or more of the materialized view or the index has been received. | 2009-03-05 |
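A hedged Python sketch of lazy maintenance as in 20090064160: maintenance tasks are queued when an update transaction commits and drained either when CPU load drops below a threshold or just before a query uses the affected view or index. The threshold value and function names are assumptions.

    import collections

    pending = collections.defaultdict(list)     # view/index name -> queued maintenance tasks

    def on_commit(affected_views, delta):
        """Called after an update transaction commits: queue a task per affected view."""
        for view in affected_views:
            pending[view].append(delta)

    def maybe_maintain(cpu_load, threshold=0.3):
        """Background path: apply queued tasks only when the CPU is mostly idle."""
        if cpu_load < threshold:
            for view, tasks in list(pending.items()):
                apply_tasks(view, tasks)
                del pending[view]

    def before_query(view):
        """Query path: force maintenance of just the view the query is about to use."""
        if pending[view]:
            apply_tasks(view, pending.pop(view))

    def apply_tasks(view, tasks):
        print(f"refreshing {view} with {len(tasks)} queued change(s)")

    on_commit(["sales_by_region"], {"+row": 1})
    before_query("sales_by_region")       # forces the refresh before the query runs
    maybe_maintain(cpu_load=0.1)          # nothing left to do in the background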
20090064161 | DEVICE ALLOCATION UTILIZING JOB INFORMATION, STORAGE SYSTEM WITH A SPIN CONTROL FUNCTION, AND COMPUTER THEREOF - This invention provides a storage system coupled to a computer that executes data processing jobs by running a program, comprising: an interface; a storage controller; and disk drives. The storage controller is configured to: control spinning of disk in the disk drives; receive job information which contains an execution order of the job and a load attribute of the job from the computer before the job is executed; select a logical volume to which none of the storage areas are allocated when requested by the computer to provide a logical volume for storing a file that is used temporarily by the job to be executed; select which storage area to allocate to the selected logical volume based on at least one of the job execution order and the job load attribute; allocate the selected storage area to the selected logical volume; and notify the computer of the selected logical volume. | 2009-03-05 |
20090064162 | RESOURCE TRACKING METHOD AND APPARATUS - The present invention is directed to a parallel processing infrastructure, which enables the robust design of task scheduler(s) and communication primitive(s). This is achieved, in one embodiment of the present invention, by decomposing the general problem of exploiting parallelism into three parts. First, an infrastructure is provided to track resources. Second, a method is offered by which to expose the tracking of the aforementioned resources to task scheduler(s) and communication primitive(s). Third, a method is established by which task scheduler(s) in turn may enable and/or disable communication primitive(s). In this manner, an improved parallel processing infrastructure is provided. | 2009-03-05 |
20090064163 | Mechanisms for Creation/Deletion of Linear Block Address Table Entries for Direct I/O - The present invention provides mechanisms that enable application instances to pass block mode storage requests directly to a physical I/O adapter without run-time involvement from the local operating system or hypervisor. In one aspect of the present invention, a mechanism is provided for handling user space creation and deletion operations for creating and deleting allocations of linear block addresses of a physical storage device to application instances. For creation, it is determined if there are sufficient available resources for creation of the allocation. For deletion, it is determined if there are any I/O transactions active on the allocation before performing the deletion. Allocation may be performed only if there are sufficient available resources and deletion may be performed only if there are no active I/O transactions on the allocation being deleted. | 2009-03-05 |
20090064164 | METHOD OF VIRTUALIZATION AND OS-LEVEL THERMAL MANAGEMENT AND MULTITHREADED PROCESSOR WITH VIRTUALIZATION AND OS-LEVEL THERMAL MANAGEMENT - A program product and method of managing task execution on an integrated circuit chip such as a chip-level multiprocessor (CMP) with Simultaneous MultiThreading (SMT). Multiple chip operating units or cores have chip sensors (temperature sensors or counters) for monitoring temperature in units. Task execution is monitored for hot tasks and especially for hotspots. Task execution is balanced, thermally, to minimize hot spots. Thermal balancing may include Simultaneous MultiThreading (SMT) heat balancing, chip-level multiprocessors (CMP) heat balancing, deferring execution of identified hot tasks, migrating identified hot tasks from a current core to a colder core, User-specified Core-hopping, and SMT hardware threading. | 2009-03-05 |
20090064165 | Method for Hardware Based Dynamic Load Balancing of Message Passing Interface Tasks - A method for providing hardware based dynamic load balancing of message passing interface (MPI) tasks are provided. Mechanisms for adjusting the balance of processing workloads of the processors executing tasks of an MPI job are provided so as to minimize wait periods for waiting for all of the processors to call a synchronization operation. Each processor has an associated hardware implemented MPI load balancing controller. The MPI load balancing controller maintains a history that provides a profile of the tasks with regard to their calls to synchronization operations. From this information, it can be determined which processors should have their processing loads lightened and which processors are able to handle additional processing loads without significantly negatively affecting the overall operation of the parallel execution system. As a result, operations may be performed to shift workloads from the slowest processor to one or more of the faster processors. | 2009-03-05 |
20090064166 | System and Method for Hardware Based Dynamic Load Balancing of Message Passing Interface Tasks - A system and method for providing hardware based dynamic load balancing of message passing interface (MPI) tasks are provided. Mechanisms for adjusting the balance of processing workloads of the processors executing tasks of an MPI job are provided so as to minimize wait periods for waiting for all of the processors to call a synchronization operation. Each processor has an associated hardware implemented MPI load balancing controller. The MPI load balancing controller maintains a history that provides a profile of the tasks with regard to their calls to synchronization operations. From this information, it can be determined which processors should have their processing loads lightened and which processors are able to handle additional processing loads without significantly negatively affecting the overall operation of the parallel execution system. As a result, operations may be performed to shift workloads from the slowest processor to one or more of the faster processors. | 2009-03-05 |
20090064167 | System and Method for Performing Setup Operations for Receiving Different Amounts of Data While Processors are Performing Message Passing Interface Tasks - A system and method are provided for performing setup operations for receiving a different amount of data while processors are performing message passing interface (MPI) tasks. Mechanisms for adjusting the balance of processing workloads of the processors are provided so as to minimize wait periods for waiting for all of the processors to call a synchronization operation. An MPI load balancing controller maintains a history that provides a profile of the tasks with regard to their calls to synchronization operations. From this information, it can be determined which processors should have their processing loads lightened and which processors are able to handle additional processing loads without significantly negatively affecting the overall operation of the parallel execution system. As a result, setup operations may be performed while processors are performing MPI tasks to prepare for receiving different sized portions of data in a subsequent computation cycle based on the history. | 2009-03-05 |
20090064168 | System and Method for Hardware Based Dynamic Load Balancing of Message Passing Interface Tasks By Modifying Tasks - A system and method are provided for providing hardware based dynamic load balancing of message passing interface (MPI) tasks by modifying tasks. Mechanisms for adjusting the balance of processing workloads of the processors executing tasks of an MPI job are provided so as to minimize wait periods for waiting for all of the processors to call a synchronization operation. Each processor has an associated hardware implemented MPI load balancing controller. The MPI load balancing controller maintains a history that provides a profile of the tasks with regard to their calls to synchronization operations. From this information, it can be determined which processors should have their processing loads lightened and which processors are able to handle additional processing loads without significantly negatively affecting the overall operation of the parallel execution system. Thus, operations may be performed to shift workloads from the slowest processor to one or more of the faster processors. | 2009-03-05 |
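The four related filings above (20090064165 through 20090064168) all keep a per-processor history of synchronization-operation calls and shift work from the slowest processor to faster ones. A minimal Python sketch of that rebalancing decision; the timing-based history and the shift fraction are illustrative stand-ins for the hardware MPI load balancing controller.

    def rebalance(history, shift_fraction=0.1):
        """Given, per processor, the times taken to reach recent synchronization calls,
        propose moving a fraction of work from the slowest to the fastest processor.
        The shift rule below is an illustrative stand-in for the patented logic."""
        average = {proc: sum(times) / len(times) for proc, times in history.items()}
        slowest = max(average, key=average.get)
        fastest = min(average, key=average.get)
        if average[slowest] - average[fastest] < 0.05 * average[slowest]:
            return None                      # already balanced; avoid needless migration
        return {"from": slowest, "to": fastest, "fraction": shift_fraction}

    history = {"p0": [1.9, 2.1, 2.0], "p1": [1.0, 1.1, 0.9], "p2": [1.2, 1.3, 1.1]}
    print(rebalance(history))    # {'from': 'p0', 'to': 'p1', 'fraction': 0.1}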
20090064169 | System and Method for Sensor Scheduling - A system for sensor scheduling includes a plurality of sensors operable to perform one or more tasks and a processor operable to receive one or more missions and one or more environmental conditions associated with a respective mission. Each mission may include one or more tasks to be performed by one or more of the plurality of sensors. The processor is further operable to select one or more of the plurality of sensors to perform a respective task associated with the respective mission. The processor may also schedule the respective task to be performed by the selected one or more sensors. The scheduling is based at least on a task value that is determined based on an options pricing model. The options pricing model is based at least on the importance of the respective task to the success of the respective mission and one or more scheduling demands. | 2009-03-05 |
20090064170 | COMMUNICATION APPARATUS AND METHOD FOR CONTROLLING COMMUNICATION APPARATUS - A communication apparatus includes a control unit including a controller configured to control the communication apparatus, a first communication unit configured to perform communication under control of the controller, and a second communication unit including a subcontrol unit and configured to perform communication under control of the subcontrol unit, wherein a load condition of the controller is determined, and one of the first communication unit and the second communication unit is selected to perform communication processing based on the determined load condition. | 2009-03-05 |
20090064171 | UPDATING WORKFLOW NODES IN A WORKFLOW - Provided are a method, system, and article of manufacture for updating workflow nodes in a workflow. A workflow program processes user input at one node in a workflow comprised of nodes and workflow paths connecting the nodes, wherein the user provides user input to traverse through at least one workflow path to reach the current node. The workflow program transmits information on a current node to an analyzer. The analyzer processes the information on the current node to determine whether there are modifications to at least one subsequent node following the current node over at least one workflow path from the current node. The analyzer transmits to the workflow program an update including modifications to the at least one subsequent node in response to determining the modifications. | 2009-03-05 |
20090064172 | SYSTEM AND METHOD FOR TASK SCHEDULING - A computer-based method for task scheduling is disclosed. The method includes: scheduling one or more scheduled tasks, creating a scheduled task list which contains the one or more scheduled tasks, reading parameters of each of the scheduled tasks, comparing the current tasks in the memory with the scheduled tasks in the scheduled task list according to the unique task IDs if the memory contains current tasks, adding the scheduled tasks that are present in the scheduled task lists and not in the memory into the memory, and removing the current tasks that are present in the memory and not present in the scheduled task lists according to the comparison. | 2009-03-05 |
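A minimal Python sketch of the memory-versus-schedule reconciliation in 20090064172: tasks are compared by unique ID, scheduled tasks missing from memory are added, and tasks no longer in the scheduled task list are removed. The dict representation of a task is an assumption.

    def reconcile(current_tasks, scheduled_tasks):
        """Compare tasks already in memory with the scheduled-task list by unique ID:
        add scheduled tasks that are missing, remove current tasks no longer scheduled."""
        current_ids = {t["id"] for t in current_tasks}
        scheduled_ids = {t["id"] for t in scheduled_tasks}
        to_add = [t for t in scheduled_tasks if t["id"] not in current_ids]
        kept = [t for t in current_tasks if t["id"] in scheduled_ids]
        return kept + to_add

    memory = [{"id": 1, "name": "backup"}, {"id": 2, "name": "old report"}]
    schedule = [{"id": 1, "name": "backup"}, {"id": 3, "name": "cleanup"}]
    print(reconcile(memory, schedule))
    # [{'id': 1, 'name': 'backup'}, {'id': 3, 'name': 'cleanup'}]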
20090064173 | CONTENT MANAGEMENT - Method for receiving tailored pages, providing tailored pages and apparatus therefore. By way of illustrating the method for receiving tailored pages within a browser running on a client device, comprises the steps of: i) browsing, in the browser, pages from a page server; ii) sending from an active page in the browser to a monitoring server, at least one monitoring message including information concerning at least one of: interactions with and performance of at least one page browsed within the browser running on the respective client device; iii) receiving in the active page, from the monitoring server, a control message including an instruction to generate a cookie within the browser including selected monitoring information; iv) generating said cookie within the browser; v) sending a message to the page server, which message includes said cookie including the respective selected monitoring information; and vi) receiving from the page server, at least one page content item selected in dependence on the selected monitoring information included in the cookie. | 2009-03-05 |
20090064174 | CONFIGURABLE DYNAMIC AUDIT LOGGER - Exemplary embodiments of the present invention comprise a method for the real-time configuration of requirements for the auditing of message log data. The method comprises identifying at least one message entry field within a message, wherein the message entry field comprises message information, creating a message entry map, the message entry map comprising instructions for the mapping of information from the identified message entry fields comprised within a message to a target audit record message, and utilizing the message entry map to configure a mapping engine to map the information from the identified message entry fields comprised within a message to a target audit record message. The method further comprises retrieving a message from an Enterprise Service Bus, extracting the information from the identified message entry fields comprised within the message, and writing the extracted message information to an audit record message. | 2009-03-05 |
20090064175 | EFFICIENT MARSHALLING BETWEEN SOAP AND BUSINESS-PROCESS MESSAGES - A business process adapter converts a SOAP (Simple Object Access Protocol) message into a business process message. A body path and a node encoding type are defined for the business process adapter. When the SOAP message is received, the business process adapter extracts the node of the SOAP message at the location defined by the body path and encodes the node according to the defined node encoding type. Additionally, the business process adapter converts a business process message into a SOAP message using a defined content encoding type and a defined format of the SOAP message. When the business process message is received from the business process management server, the business process adapter encodes the body of the business process message according to the defined content encoding type and generates the SOAP message from the encoded body according to the defined format. | 2009-03-05 |
20090064176 | Handling potential deadlocks and correctness problems of reduce operations in parallel systems - In one embodiment, the present invention includes a method for executing a first reduction operation on data in an input buffer, executing a second reduction operation on the data, where the second reduction operation has a higher reliability than the first reduction operation, and comparing the first and second results. Other embodiments are described and claimed. | 2009-03-05 |
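To make the double-reduction check in the 20090064176 abstract concrete, here is a toy sketch: run a fast, less reliable reduction and a slower, more reliable one over the same input buffer and compare the results. In this sketch "less reliable" is naive float summation and "more reliable" is math.fsum; the patent concerns reduce operations in parallel systems (e.g. MPI collectives), which this deliberately simplifies, and the function names and tolerance rule are assumptions.

```python
# Toy double-reduction consistency check; names and tolerance are illustrative.
import math

def fast_reduce(buffer: list[float]) -> float:
    total = 0.0
    for x in buffer:          # naive left-to-right summation
        total += x
    return total

def reliable_reduce(buffer: list[float]) -> float:
    return math.fsum(buffer)  # correctly rounded floating-point sum

def checked_reduce(buffer: list[float], tolerance: float = 1e-9) -> float:
    first = fast_reduce(buffer)
    second = reliable_reduce(buffer)
    # Flag the result if the two reductions disagree beyond the tolerance.
    if abs(first - second) > tolerance * max(1.0, abs(second)):
        raise RuntimeError(f"reduction mismatch: {first!r} vs {second!r}")
    return second

data = [0.1] * 10
print(checked_reduce(data))   # 1.0 (both reductions agree within tolerance)
```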
20090064177 | METHOD FOR DATA DELIVERY IN A NETWORK - The present invention relates to a method of delivering data from a sender application to at least one receiver application, the applications being arranged in a protocol stack comprising: underlying the sender application, a sender messaging layer and a sender transport layer, and underlying the receiver application, a receiver messaging layer and a receiver transport layer, wherein the sender transport layer and the receiver transport layer are coupled by way of a network layer, the method comprising the steps of: incorporating a sender intermediate layer between the sender messaging layer and the sender transport layer and a receiver intermediate layer between the receiver messaging layer and the receiver transport layer; configuring the interface characteristics of the intermediate layers to be the same as for their corresponding transport layers; creating a sender queue in a non-volatile data storage component of the sender intermediate layer and a receiver queue in a non-volatile data storage component of the receiver intermediate layer; storing the data to be sent from the sender application to the receiver application in the sender queue; and transmitting the data stored in the sender queue to the receiver queue via the sender transport layer and the receiver transport layer. | 2009-03-05 |
20090064178 | MULTIPLE, COOPERATING OPERATING SYSTEMS (OS) PLATFORM SYSTEM AND METHOD - A multiple, cooperating operating systems (OS) platform system with multiple processors. Multiple operating systems, each of which may be of a different type or nature, can run on different partitions of the multi-processor platform and yet coexist and cooperate. A real time operating system (RTOS) executing on a processor can communicate with another OS executing on another processor via a portion of memory accessible by both the RTOS and the OS by performing read and write operations. | 2009-03-05 |
20090064179 | METHOD AND SYSTEM FOR FLEXIBLE AND NEGOTIABLE EXCHANGE OF LINK LAYER FUNCTIONAL PARAMETERS - A proposal is discussed that facilitates exchanging parameters for a link layer and that allows a variable number of parameters without changing a communication protocol. Likewise, the proposal allows both components connected via the link to negotiate values for the parameters that are exchanged without a need for external agent intervention or redundancy. | 2009-03-05 |
20090064180 | INTERPROCESSOR COMMUNICATION PROTOCOL - An InterProcessor Communication (IPC) Protocol network ( | 2009-03-05 |
20090064181 | UNOBTRUSIVE PORT AND PROTOCOL SHARING AMONG SERVER PROCESSES - A method for augmenting a hierarchy of layered applications and corresponding protocols can include applying a discrimination algorithm to a selection process in which a particular application/protocol layer in a listing of adjacent application/protocol layers is selected to receive traffic flowing through the hierarchy. A new application/protocol layer is inserted adjacent to the particular application/protocol layer in the hierarchy. Also, a new application/protocol layer is added to the listing, and the discrimination algorithm is replaced with another discrimination algorithm programmed to consider the new application/protocol layer during the selection process. The inserting, adding, and replacing steps are each performed without decoupling or disabling other applications and protocols in the hierarchy. | 2009-03-05 |
20090064182 | Systems and/or methods for providing feature-rich proprietary and standards-based triggers via a trigger subsystem - The example embodiments disclosed herein relate to application integration techniques and, more particularly, to application integration techniques built around the publish-and-subscribe model (or one of its variants). In certain example embodiments, triggers are provided for establishing subscriptions to publishable document types and for specifying the services that will process documents received by the subscription. A standards-based messaging protocol (e.g., JMS messaging) may be fully embedded as a peer to a proprietary messaging protocol provided to an integration server's trigger subsystem so that all or substantially all of the feature-rich capabilities available via the proprietary protocol may also become available via the standards-based messaging protocol. The triggers may be JMS triggers in certain example embodiments. | 2009-03-05 |
20090064183 | Secure Inter-Module Communication Mechanism - Methods, apparatuses, and systems directed to facilitating secure, structured interactions between code modules executing within the context of a document processed by a user agent, such as a browser client, that implements a domain security model. In a particular implementation, a module connector script or object loaded into a base document discovers listener modules and sender modules corresponding to different origins or domains, and passes information between them. In this manner, a listener module may consume and use information from a sender module located on the same page simply by having an end-user add both modules to a web page without having to explicitly define any form of interconnection. For example, a photo module may access a user account at a remote photo sharing site, and provide one or more photos to a module that renders the photographs in a slide show. | 2009-03-05 |
20090064184 | PRE-POPULATION OF META DATA CACHE FOR RESOLUTION OF DATA MARSHALING ISSUES - In a data processing system, objects (in the object oriented sense of the word) are instantiated through the use of transmitted data which is marshaled and demarshaled through the use of protocols that acquire meta data for the transmitted data through the use of an already existing cache of such meta data which has proper content meeting version requirements as specified by an implementation key associated with the object. This eliminates the need for call back requests that may or may not succeed because of the presence of a firewall in a yet-to-be-established connection. A tool is provided for structuring the data, first on disk and then later in a more readily available portion of an active memory. | 2009-03-05 |
20090064185 | High-Performance XML Processing in a Common Event Infrastructure - Delegation of processing functions to specialized appliances in an enterprise is provided. An appliance typically comprises a combination of hardware and resident firmware that addresses needs in a computing environment, such as by providing common message transformation, integration, security, filtering and other functions. Delegation is carried out by specifying at least one XML function for front-process offloading from a server to a corresponding appliance configured to receive messages pushed towards the server, communicating management directives to the appliance for configuring the appliance to perform the specified XML function(s) according to specific requirements dynamically specified by the server and communicating instructions to the appliance so that the appliance augments received event messages with intermediate processing information based upon the front-process offloading, as received event messages pass through the appliance. | 2009-03-05 |
20090064186 | MOBILE DEVICE WITH TWO OPERATING SYSTEMS AND METHOD FOR SHARING HARDWARE DEVICE BETWEEN TWO OPERATING SYSTEMS THEREOF - A mobile device and a method for sharing a hardware device thereof are provided. Two operating systems are executed on the present mobile device simultaneously, and an embedded controller is configured to communicate among the two operating systems and a shared hardware device of the mobile device. When one of the operating systems encodes an operating command into a uniform message and transmits the uniform message to the embedded controller, the uniform message is decoded into the operating command by the embedded controller such that the hardware device operates according to the decoded operating command. On the other hand, when the embedded controller receives input data from the hardware device, the embedded controller encodes the input data into the uniform message and transmits the uniform message to one of the operating systems. The operating system receiving the uniform message decodes the uniform message into the input data. | 2009-03-05 |
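Purely to illustrate the "uniform message" round trip in the 20090064186 abstract, the sketch below encodes an operating command into a fixed, OS-agnostic byte format that an embedded controller could decode, and input data could travel back the same way. The field layout (kind, device, payload) and JSON encoding are assumptions made for the sketch, not the format used in the filing.

```python
# Hypothetical uniform-message encode/decode round trip; layout is an assumption.
import json

def encode_uniform(kind: str, device: str, payload: dict) -> bytes:
    """Pack a command or input report into one uniform byte message."""
    return json.dumps({"kind": kind, "device": device, "payload": payload},
                      sort_keys=True).encode("utf-8")

def decode_uniform(message: bytes) -> dict:
    """Recover the original structure on the other side of the controller."""
    return json.loads(message.decode("utf-8"))

# One OS asks the shared camera to capture; the controller decodes and acts.
msg = encode_uniform("command", "camera", {"action": "capture", "resolution": "vga"})
print(decode_uniform(msg)["payload"]["action"])   # capture
```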
20090064187 | SYSTEM AND METHOD FOR DATA MANAGEMENT OF EMBEDDED SYSTEMS - A system, method, and computer program for managing embedded component information for a product design in a PLM environment, comprising displaying at least one message object; and associating said at least one message object with a signal object, and appropriate means and computer-readable instructions. | 2009-03-05 |
20090064188 | SYSTEM AND METHOD OF VERIFYING A VIDEO BLACKOUT EVENT - A method of verifying a blackout event is disclosed that includes receiving blackout event data from an event log database of a video distribution platform at a subscriber event transmission interface (SETI) communicating with an electronic data warehouse (EDW) system. The method also includes creating at least one EDW load-ready file that includes at least a portion of the blackout event data. | 2009-03-05 |
20090064189 | ONTOLOGY DRIVEN CONTEXTUAL MEDIATION - A method for ontologically driving context mediation in a computing system can include collecting events arising from a solution in a computing environment, loading operational meta-data for the solution, contextually mediating, for example context interchange (COIN) mediating, the collected events with the operational meta-data to produce context sensitive events, and correlating the context sensitive events with corresponding symptoms in a display to an end user in the computing environment. | 2009-03-05 |
20090064190 | TECHNIQUES FOR RECEIVING EVENT INFORMATION - Techniques involving the reception of information regarding scheduled events are disclosed. For example, an apparatus may include an event management module and a communications interface module. The event management module creates an event object corresponding to an event. The event object may include a desired status information indicator. Based on this indicator, the communications interface module receives the desired status information from a remote device. | 2009-03-05 |
20090064191 | METHODS AND SYSTEMS FOR A RICH INTERNET BUS - An embodiment relates generally to a method of updating data. The method includes providing for a plurality of components, where each component is associated with a respective web page. The method also includes providing for a subset of components from the plurality of components, where the subset of components subscribes to an event. The method further includes publishing a notification message in response to the event occurring and retrieving the event by the subset of components. | 2009-03-05 |
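The publish-and-subscribe flow in the 20090064191 abstract can be pictured with a small sketch: a subset of components subscribes to a named event, and a notification message is delivered to exactly that subset when the event occurs. The class and method names below are illustrative only, and components are reduced to plain callables rather than web-page widgets.

```python
# Minimal publish/subscribe bus sketch; names are illustrative.
from collections import defaultdict
from typing import Callable

class EventBus:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event: str, component: Callable[[dict], None]) -> None:
        """Register a component (here just a callable) for an event."""
        self._subscribers[event].append(component)

    def publish(self, event: str, payload: dict) -> None:
        """Deliver a notification message to the subscribing subset only."""
        for component in self._subscribers.get(event, []):
            component(payload)

bus = EventBus()
bus.subscribe("price_changed", lambda msg: print("widget A refreshes:", msg))
bus.subscribe("price_changed", lambda msg: print("widget B refreshes:", msg))
bus.publish("price_changed", {"sku": "1234", "price": 9.99})
bus.publish("unrelated_event", {})   # no subscribers, so nothing happens
```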
20090064192 | Managing Collections of Appliances - A protocol to enable management of opaque entities in a computing environment comprises an events component and a commands component. The events component enables a manager to utilize a received event communicated by a corresponding managed entity to indicate when administration or other management actions have occurred to domain information on the corresponding managed entity. The commands component interacts with the managed entities in response to the events component receiving corresponding events therefrom. The commands component further comprises commands for backing up the domain information stored by the managed entities as opaque configuration objects, for restoring the domain information to the managed entities as opaque configuration objects, and for querying an identified one of the plurality of managed entities to determine whether two domain configurations are semantically different in a way that allows the configuration to remain opaque to the manager. | 2009-03-05 |
20090064193 | Distributed Network Processing System including Selective Event Logging - Systems for selectively logging events in a network. In particular implementations, a method includes receiving indications of events associated with a network application; selectively flagging one or more of the events for logging; and applying the events to a processing stream comprising a plurality of process modules. The process modules are operative to receive events from another process module; apply one or more operations in response to the received events; and conditionally transmit one or more log messages identifying flagged events to a log data store. | 2009-03-05 |
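A rough sketch of the selective-logging pipeline in the 20090064193 abstract: events are flagged (or not) for logging, then passed through a chain of process modules, and only flagged events are forwarded to a log store. The module names and the flagging rule are assumptions made for illustration, and the distributed aspects of the system are not reproduced.

```python
# Selective event-logging pipeline sketch; policy and names are assumptions.
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    data: dict
    flagged: bool = False        # set when the event should be logged

log_store: list[str] = []

def flag_interesting(event: Event) -> Event:
    # Hypothetical policy: log anything that looks like an error.
    event.flagged = event.name.startswith("error")
    return event

def enrich(event: Event) -> Event:
    event.data["seen_by"] = event.data.get("seen_by", 0) + 1
    return event

def log_if_flagged(event: Event) -> Event:
    if event.flagged:
        log_store.append(f"{event.name}: {event.data}")
    return event

PIPELINE = [flag_interesting, enrich, log_if_flagged]

def process(event: Event) -> Event:
    for module in PIPELINE:      # each module receives the previous module's output
        event = module(event)
    return event

process(Event("error.timeout", {"host": "app1"}))
process(Event("request.ok", {"host": "app1"}))
print(log_store)                 # only the flagged error event was logged
```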
20090064194 | Event driven sendfile - A file transfer manager for managing file transfers using the sendfile operation. The sendfile operation is optimized to minimize system resources necessary to complete the file transfer. The sendfile operation decreases the resources required during idle times by sharing a thread with other idle sendfile operations. The sendfile operation is then assigned a worker thread when further data is ready to be transferred. | 2009-03-05 |
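For readers unfamiliar with the underlying primitive, here is a hedged sketch of a sendfile-based transfer: it uses Python's os.sendfile (available on Linux and macOS) to push file bytes to an already-connected socket without copying them through user space. The idle-thread sharing and worker-thread hand-off described in the 20090064194 abstract are not reproduced; the function below is only an assumption-laden illustration of the sendfile operation itself.

```python
# Sketch of a blocking sendfile loop over a connected TCP socket.
import os
import socket

def send_file(conn: socket.socket, path: str, chunk: int = 64 * 1024) -> int:
    """Send the whole file over an already-connected socket via os.sendfile."""
    sent_total = 0
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        while sent_total < size:
            # os.sendfile(out_fd, in_fd, offset, count) returns bytes sent.
            sent = os.sendfile(conn.fileno(), f.fileno(), sent_total,
                               min(chunk, size - sent_total))
            if sent == 0:        # peer closed the connection early
                break
            sent_total += sent
    return sent_total
```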
20090064195 | METHOD FOR SYNCHRONIZING INFORMATION OF DUAL OPERATING SYSTEMS - A method for synchronizing information of dual operating systems is provided. The method is used for synchronizing information of a first operating system and a second operating system when an electronic device switches from the first operating system to the second operating system. First, the second operating system sends an information requesting message to a controller of the electronic device when the first operating system is switched to the second operating system. The controller checks if the first operating system operates in a work mode. If the first operating system operates in the work mode, the controller forwards the information requesting message to the first operating system, so as to obtain the information of the first operating system. Finally, the second operating system synchronizes the information recorded therein according to the obtained information. | 2009-03-05 |
20090064196 | MODEL BASED DEVICE DRIVER CODE GENERATION - A driver model is generated that describes the configuration of one or more driver objects. The driver model and developer driver code are compiled to generate a driver including a machine readable driver model and compiled developer driver code, wherein the machine readable driver model and the compiled developer driver code are independently serviceable. | 2009-03-05 |
20090064197 | Driver installer usable in plural environments - An executable file can be constructed that contains different driver installer code for use in different environments. A first executable file contains first program code that performs driver installation operations in a first environment, and that also checks to determine which environment the first program code is running in. If the first program code is running in the first environment, then the driver installation operations proceed using the first program code. If the first program code is running in a second environment, then second program code, which performs the driver installation operations in a second environment, is extracted from a resource in the first executable file. The second program code is copied into a second executable file. The second executable file is then invoked to perform the driver installation operations in the second environment. | 2009-03-05 |
20090064198 | APPARATUS, SYSTEM, AND COMPUTER PROGRAM PRODUCT FOR PROCESSING INFORMATION - An image processing apparatus includes an input unit that inputs a command, a wireless interface unit that wirelessly exchanges data with an image forming apparatus in a direct manner, a transmission/reception control unit that controls an operation of the wireless interface unit, and a program processing unit that performs an installation process of a program. The transmission/reception control unit causes the wireless interface unit to receive a driver module from the image forming apparatus. The program processing unit performs an installation process of the driver module received by the wireless interface unit. | 2009-03-05 |
20090064199 | HETEROGENEOUS ARCHITECTURE IN POOLING MANAGEMENT - A method, system, and computer program product for managing a heterogeneous connection pooling structure. The heterogeneous architecture of pooling management comprises connections having different connection attributes (i.e., different data source properties) that can share the same connection pool (i.e., the same connection pool data source). An application requests a connection from a data source having a specified data source property. An application server searches a pool module for an available cached connection. If a cached connection is available, the cached connection is automatically selected as the returned connection. A connection reuse protocol and a statement reuse protocol are determined and invoked to reconfigure the cached connection for reuse as a connection between the application and a database server. | 2009-03-05 |
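A minimal sketch of the cached-connection reuse described in the 20090064199 abstract: requests with different data-source properties share one pool, and a cached connection is reconfigured (a stand-in for the "connection reuse protocol") before it is handed back to the application. All names are illustrative, and real statement reuse and database I/O are omitted.

```python
# Shared connection pool with reconfigure-on-reuse; names are illustrative.
from collections import deque
from dataclasses import dataclass

@dataclass
class Connection:
    properties: dict                          # current data-source properties

class SharedPool:
    def __init__(self) -> None:
        self._idle: deque[Connection] = deque()

    def get(self, wanted: dict) -> Connection:
        if self._idle:                        # a cached connection is available
            conn = self._idle.popleft()
            if conn.properties != wanted:     # stand-in for the reuse protocol:
                conn.properties = dict(wanted)  # reconfigure before reuse
            return conn
        return Connection(properties=dict(wanted))   # otherwise open a new one

    def release(self, conn: Connection) -> None:
        self._idle.append(conn)               # return to the shared pool

pool = SharedPool()
c1 = pool.get({"isolation": "read-committed"})
pool.release(c1)
c2 = pool.get({"isolation": "serializable"})  # same cached connection, reconfigured
print(c1 is c2, c2.properties)                # True {'isolation': 'serializable'}
```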
20090064200 | Centralized Enhancement Service - A supply chain management program enables a customer to create new screens or modified screens in the supply chain management interface. The system includes a centralized enhancement mechanism allowing the supply chain management program to detect enhancements to screens and the associated transactions. The customer implements a screen enhancement interface and a template for each screen enhancement. Upon activation of a transaction, the system and method automatically detect the enhancements and generate the enhanced screen. The method and system also provide processes for updating the screens and database through the enhanced screen. | 2009-03-05 |
20090064201 | Image Forming Apparatus, Application Management Method, and Computer-Readable Recording Medium Having Application Management Program - An image forming apparatus includes an output order control unit, wherein an application executed in the image forming apparatus is constructed by connection of a first software unit executing at least a process related to input of image data and a plurality of second software units executing a process related to output of the image data, and the output order control unit determines an execution order of the plural second software units, based on output order information indicating the execution order of the plural second software units. | 2009-03-05 |
20090064202 | SUPPORT LAYER FOR ENABLING SAME ACCESSORY SUPPORT ACROSS MULTIPLE PLATFORMS - A support layer for enabling same accessory support across multiple platforms may be provided for supporting interactions of applications, operating systems, and accessory devices. Cross-platform use and cross-accessory use can be realized by designing accessories and applications to operate with a support layer. The support layer can allow accessories to work on multiple devices without requiring development for the accessory of separate software programs to support the multiple devices. The support layer can also allow applications to interact with different accessories again without requiring development for the applications of separate interfaces to support the different accessories. | 2009-03-05 |
20090064203 | Color Management System that Enables Dynamic Balancing of Performance with Flexibility - A method and system are provided that allow a computer system platform to intervene in the content workflow and perform additional color management based upon the content state and any color management policies in place. Profile data from a source is converted to an intermediate color space upon entry into the platform at a choke point. In response to the current color content, profile data, and/or policy controls of the platform, color management input can be managed to change color management data immediately, change color management data at a later point, and/or ignore color management data. | 2009-03-05 |
20090064204 | Method for Using SNMP as an RPC Mechanism for Exporting the Data Structures of a Remote Library - A semi-automatic mapping of a library definition to a simple network management protocol (SNMP) management information base (MIB). By exposing the internal data needed to remotely access arbitrary user space libraries as SNMP data structures which can be directly modified over the network, the internal data, its operations, and usages can be modeled remotely. | 2009-03-05 |
20090064205 | SYSTEM AND METHOD FOR HARVESTING SERVICE METADATA FROM A METADATA REPOSITORY INTO AN ARCHITECTURE DIAGRAM - Embodiments of the invention are generally related to architecture diagrams and metadata repositories, particularly with regards to systems and methods for harvesting service metadata from a metadata repository into an architecture diagram. One embodiment includes a plug-in to an architecture design tool communicating to the service metadata repository through an application programming interface. One embodiment includes incorporating service metadata entities from a service metadata repository into architecture diagram entities in an architecture diagram. | 2009-03-05 |