44th week of 2015 patent application highlights part 46
Patent application number | Title | Published |
20150309731 | Dynamic Tuning of Memory in MapReduce Systems - Methods, systems, and computer program products for dynamic tuning of memory in MapReduce systems are provided herein. A method includes analyzing (i) memory usage of a first sub-set of multiple tasks associated with a MapReduce job and (ii) an amount of data utilized across the first sub-set of the multiple tasks; determining a memory size to be allocated to the first sub-set of the multiple tasks based on said analyzing, wherein said memory size minimizes a cost function related to said memory usage and said amount of data utilized; performing a task-wise performance comparison among a second sub-set of the multiple tasks associated with the MapReduce job using the determined memory size to be allocated to the first sub-set of the multiple tasks to generate a set of memory allocation results; and dynamically applying the set of memory allocation results to one or more additional tasks associated with the MapReduce job. | 2015-10-29 |
20150309732 | Selectively Configuring Hard-Disk Drive System - In one embodiment of the disclosure, a hard-disk drive (HDD) controller for an HDD system is selectively configurable to operate with a first type of host system having a first logical block size and a second type of host system having a second logical block size, different from the first logical block size. Another embodiment of the disclosure is a method implemented by the HDD system. | 2015-10-29 |
20150309733 | EFFICIENCY SETS IN A DISTRIBUTED SYSTEM - Disclosed are systems, computer-readable mediums, and methods for efficiency sets in a distributed system. A first efficiency set is determined for a first volume of data. Determining the first efficiency set includes selecting block identifiers for data blocks of the first volume, where each block identifier is used to access a particular data block corresponding to the first volume. Determining the first efficiency set further includes applying a mask to the selected block identifiers to mask at least one bit of each selected block identifier. The first efficiency set is compared to a second efficiency set for a second data store, and based on the comparison, an amount of unique data blocks of the first volume is approximated. | 2015-10-29 |
20150309734 | MULTIPLE LAYER OPTICAL DISC, AND DEVICE FOR WRITING SUCH DISC - A method of operating a medium access device includes writing by a writer information in a logical storage space of a storage medium which has a physical storage space comprising two or more layers of physical storage locations, each storage location having a physical address, the logical storage space comprising storage locations within a first layer of the layers and within a subsequent layer of the layers, the storage locations in the logical storage space having contiguously numbered logical addresses; storing in an address limit memory at least a value for a parameter indicating a maximum value of the logical addresses of the storage locations in the first layer; and changing, by a processor, the maximum value in the address limit memory, and providing an output when the maximum value cannot be changed, so as to avoid attempting to change the maximum value. | 2015-10-29 |
20150309735 | TECHNIQUES FOR REDUCING READ I/O LATENCY IN VIRTUAL MACHINES - A computer implemented method for reducing the latency of an anticipated read of disk blocks from a swap file in a virtualized environment. First, the method identifies a sequence of disk blocks that was written in a guest swap file. The method then detects a first reference within the sequence of blocks that references a first disk block stored in a host swap file and a second reference within the sequence of blocks that references a second disk block stored in the host swap file. The method then moves the second disk block to a location in a host swap file that is adjacent to the first disk block. In some examples, the first block and second block are both moved to a new location in the host swap file where they are adjacent to one another. | 2015-10-29 |
20150309736 | TECHNIQUES FOR REDUCING READ I/O LATENCY IN VIRTUAL MACHINES - A computer implemented method for reducing the latency of an anticipated read of disk blocks from a swap file in a virtualized environment. The environment includes a host swap file maintained by a host operating system and a guest swap file maintained by a guest operating system. First, the method identifies a sequence of disk blocks that was written in the guest swap file. The method then detects within the sequence of blocks a first disk block that contains a reference to a second disk block that is stored in the host swap file. The method then replaces the first disk block in the guest swap file with the second disk block. | 2015-10-29 |
20150309737 | MEMORY SYSTEM AND METHOD OF OPERATING THE MEMORY SYSTEM - According to example embodiments, a memory system includes a memory device and a memory controller configured to control the memory device. The memory device includes a plurality of memory cells. The memory controller includes a storage unit configured to sequentially store a plurality of commands received from a host, a distance determination unit configured to determine a distance between a program command and a read command, associated with the same word line, from among the plurality of commands stored in the storage unit, and a read voltage determination unit configured to determine a read voltage level corresponding to the read command based on the determined distance. | 2015-10-29 |
20150309738 | ADAPTING TO PREDICTED CHANGES IN HOST TRANSMISSION RATES - In one embodiment, a method is provided for adapting a host transfer rate between a host and a tape drive to a medium transfer rate between the tape drive and a tape. A data compression rate of untransferred data in a buffer of a tape drive is measured. A change in a future host transfer rate is predicted based on the measured compression rate. A host transfer rate between a host and the tape drive is adapted to a medium transfer rate between the tape drive and a tape, based on the predicted change. | 2015-10-29 |
20150309739 | STORAGE DEVICE AND METHOD OF CONTROLLING STORAGE DEVICE - It is an object of the present invention to suppress inconsistency of mount information necessary for mounting a volume in a storage device having a primary volume and a secondary volume. | 2015-10-29 |
20150309740 | DATA DECOMPRESSION USING A CONSTRUCTION AREA - For serving sequential read patterns from a compressed journal storage system, a construction area cache algorithm is used to temporarily store the read and decompressed data in a user view sequential order to minimize disk I/Os and CPU utilization while serving the data to the user. | 2015-10-29 |
20150309741 | APPARATUSES AND METHODS FOR MEMORY MANAGEMENT - Some embodiments include apparatuses and methods to select a target memory portion in a first memory location to store information. One such method can conditionally store the information in a second memory location when the information is stored in the target memory portion. Other embodiments are described. | 2015-10-29 |
20150309742 | APPARATUS, SYSTEM, AND METHOD FOR NON-VOLATILE DATA STORAGE AND RETRIEVAL - A computer memory device and a method of storing data are provided. The computer memory device includes a parallel memory interface configured to be operatively coupled to a system memory controller, to receive data and commands including logical addresses from the system memory controller, and to transmit data to the system memory controller. The parallel memory interface is configured to respond to the commands from the storage device driver of a computer processing unit. The computer memory device further includes an address translation circuit configured to receive the logical addresses from the parallel memory interface and to translate the received logical addresses to corresponding physical addresses. The computer memory device further includes a non-volatile memory operatively coupled to the parallel memory interface and the address translation circuit. The non-volatile memory is configured to receive the physical addresses and the data and to store the data at memory locations of the non-volatile memory corresponding to the physical addresses. | 2015-10-29 |
20150309743 | SEMICONDUCTOR MEMORY DEVICES AND MEMORY SYSTEMS INCLUDING THE SAME - A semiconductor memory device includes a control logic and a memory cell array in which a plurality of memory cells are arranged. The memory cell array includes a plurality of bank arrays, and each of the plurality of bank arrays includes a plurality of sub-arrays. The control logic controls an access to the memory cell array based on a command and an address signal. The control logic dynamically sets a keep-away zone that includes a plurality of memory cell rows which are deactivated based on a first word-line when the first word-line is enabled. The first word-line is coupled to a first memory cell row of a first sub-array of the plurality of sub-arrays. Therefore, increased timing parameters may be compensated, and parallelism may be increased. | 2015-10-29 |
20150309744 | SEMICONDUCTOR STORAGE DEVICE AND CONTROL METHOD FOR SAME - A semiconductor storage device includes at least one memory from among a primary memory, a mirror memory storing data corresponding to data stored in the primary memory, and a buffer memory; and a controller that controls the at least one memory so as to store data in the at least one memory and read data from the at least one memory. | 2015-10-29 |
20150309745 | PREDICTIVE POINT-IN-TIME COPY FOR STORAGE SYSTEMS - Method and system are provided for predictive point-in-time copy for storage systems. The method may include: recording a frequency of writes to an area of a storage volume; and prioritising areas for having point-in-time copies carried out based on the write frequency to an area, wherein areas in the storage volume having a high write frequency are prioritised before areas with a lower write frequency. An area may be of a coarser granularity than a region tracked for the point-in-time copy. The method may include: recording the frequency of writes to an area in a given period; and prioritising areas by their frequency of writes in the given period immediately prior to the point-in-time copy. | 2015-10-29 |
20150309746 | EFFICIENCY SETS IN A DISTRIBUTED SYSTEM - Disclosed are systems, computer-readable mediums, and methods for efficiency sets in a distributed system. A first efficiency set is determined for a first volume of data. Determining the first efficiency set includes selecting block identifiers for data blocks of the first volume, where each block identifier is used to access a particular data block corresponding to the first volume. Determining the first efficiency set further includes applying a mask to the selected block identifiers to mask at least one bit of each selected block identifier. The first efficiency set is compared to a second efficiency set for a second data store, and based on the comparison, an amount of unique data blocks of the first volume is approximated. | 2015-10-29 |
20150309747 | REPLICATING TRACKS FROM A FIRST STORAGE SITE TO A SECOND AND THIRD STORAGE SITES - Provided are a computer program product, system, and method for replicating tracks from a first storage to a second and third storages. A determination is made of a track in the first storage to transfer to the second storage as part of a point-in-time copy relationship and of a stride of tracks including the target track. The stride of tracks including the target track is staged from the first storage to a cache according to the point-in-time copy relationship. The staged stride is destaged from the cache to the second storage. The stride in the cache is transferred to the third storage as part of a mirror copy relationship. The stride of tracks in the cache is demoted in response to destaging the stride of the tracks in the cache to the second storage and transferring the stride of tracks in the cache to the third storage. | 2015-10-29 |
20150309748 | AUTONOMIC RECLAMATION PROCESSING ON SEQUENTIAL STORAGE MEDIA - Various embodiments for autonomic reclamation of data stored on at least one sequential storage media are provided. In one exemplary embodiment, active data is identified, read out, and stored in a sequential order by starting at a beginning block address of the at least one sequential storage media. At least one of a start address, an end address, and a data length of all original blocks of the active data in a backup application is defined. A new start address for each original block of active data to be written to the backup application is generated. A mapping is yielded and sent from the backup application to a sequential storage media device having the at least one sequential storage media, and the active data is read from each original block address in sequential order. | 2015-10-29 |
20150309749 | METHOD FOR SELECTIVELY PERFORMING A SECURE DATA ERASE TO ENSURE TIMELY ERASURE - A method and computer program product are provided to ensure a timely secure data erase by determining an erasure deadline for each physical volume of a plurality of physical volumes and calculating a remaining time for each physical volume. The remaining time is calculated for each physical volume by comparing a current date to the erasure deadline of each physical volume respectively. A secure data erase is performed on the plurality of physical volumes in an order based on the calculated remaining time, where the secure data erase is performed on the physical volume with a shortest calculated remaining time first. | 2015-10-29 |
20150309750 | TWO-STAGE READ/WRITE 3D ARCHITECTURE FOR MEMORY DEVICES - Some embodiments of the present disclosure relate to a memory device wherein a single memory cell array is partitioned between two or more tiers which are vertically integrated on a single substrate. The memory device also includes support circuitry including a control circuit configured to read and write data to the memory cells on each tier, and a shared input/output (I/O) architecture which is connected to the memory cells within each tier and configured to receive an input data word prior to a write operation, and further configured to provide an output data word after a read operation. Other devices and methods are also disclosed. | 2015-10-29 |
20150309751 | Throttling Command Execution in Non-Volatile Memory Systems Based on Power Usage - A method of operation in a non-volatile memory system for deferring, in accordance with a determination to reduce power consumption by the non-volatile memory system, execution of commands in a command queue corresponding to a distinct set of non-volatile memory devices during a respective wait period. In some implementations, the respective wait period for a first distinct set of non-volatile memory devices in at least two distinct sets is at least partially non-overlapping with the respective wait period for a second distinct set of non-volatile memory devices in the at least two distinct sets. | 2015-10-29 |
20150309752 | Storage System Power Management Using Controlled Execution of Pending Memory Commands - The various embodiments described herein include methods and/or systems for throttling power in a storage device. In one aspect, a method of operation in a storage system includes obtaining a power metric corresponding to a count of active memory commands in the storage system, where active memory commands are commands being executed by the storage system. The method further includes, in accordance with a determination that the power metric satisfies one or more power thresholds, deferring execution of one or more pending memory commands. | 2015-10-29 |
20150309753 | Memory Control System for a Non-Volatile Memory and Control Method - A memory control system for controlling read and write operations of a non-volatile memory, wherein the memory control system comprises a memory controller that is adapted to implement a write operation for writing at least one block of data to the memory as a sequence of memory write and validation cycles for part or all of the data. In one example, the number of cycles is a function of the amount of successfully written data per cycle and is thus variable in dependence on the success of the data writing. The system also includes a power management unit, which is adapted to authorize or prevent the memory controller from conducting the write operation at the level of the write cycles, thereby to control the timing of power consumption resulting from the cycles of the write operation. | 2015-10-29 |
20150309754 | System and Method for Erasing Data on an Electronic Device - A data erasing system and method for erasing the data on multiple electronic devices at a time, where the multiple electronic devices do not all have to be of the same type or connected at the same time, and where the electronic device's battery may be dead prior to erasure. | 2015-10-29 |
20150309755 | EFFICIENT COMPLEX NETWORK TRAFFIC MANAGEMENT IN A NON-UNIFORM MEMORY SYSTEM - A network appliance includes a first processor, a second processor, a first storage device, and a second storage device. A first status information is stored in the first storage device. The first processor is coupled to the first storage device. A queue of data is stored in the second storage device. The first status information indicates if traffic data stored in the queue of data is permitted to be transmitted. The second processor is coupled to the second storage device. The first processor communicates with the second processor. The traffic data includes packet information. The first storage device is a high speed memory only accessible to the first processor. The second storage device is a high capacity memory accessible to multiple processors. The first status information is a permitted bit that indicates if the traffic data within the queue of data is permitted to be transmitted. | 2015-10-29 |
20150309756 | IMAGE FORMING APPARATUS AND IMAGE FORMATION METHOD THAT SUPPRESS TEMPERATURE OF FIXING DEVICE - Provided is an image forming apparatus that shortens the time to printout when a print job is received in a sleep state while a storage device, such as an HDD, is in an unmounted state. A control part notifies printing preparation and a printing process to a print engine based on a processing result of an analyzing part. While suppressing the temperature of the fixing device, if received data are analyzed and print data are generated, the analyzing part sends the control part a notice that a first drawing object has been generated. The control part confirms the connecting state of the HDD in response to the notice. If the HDD is in an unmounted state, the control part notifies the printing preparation to the print engine. | 2015-10-29 |
20150309757 | PRINTER INTERFACE FOR PRINTING DATA AND/OR RECEIPTS TO AND FROM HAND HELD DEVICES - Previous printers were designed to print a paper copy of data and/or receipts, which causes a disconnect with modern day data manipulation. This printer interface can print data to and receive data from the internet and hand held devices, which will open up extremely fast data exchange and data manipulation for consumers, cities, states, and the federal government without the expense of having to purchase complete new systems. By simply changing out an old printer, we can connect all old computer systems with modern day systems that now have the ability to manipulate data automatically. | 2015-10-29 |
20150309758 | IMAGE FORMATION DEVICE - An image formation device includes a document box, an operating section, a version check control section, a document data update control section, and a print control section. When receiving a select operation, the version check control section determines whether or not a selected piece of document data has been updated on an external server. If the selected data has been updated on the external server, the document data update control section downloads an updated piece of document data from the external server and updates the piece of document data in the document box. When receiving a print instruction operation, the print control section causes printing based on the piece of document data already stored in the document box if the selected piece of document data has not been updated or causes printing based on the updated piece of document data if the selected piece of document data has been updated. | 2015-10-29 |
20150309759 | TERMINAL APPARATUS, OUTPUT SYSTEM, AND OUTPUT METHOD - A terminal apparatus capable of communicating with an image forming apparatus includes an output management unit configured to manage first output data stored in a first output data storage part, the first output data being created based on data to be output, and being independent of the image forming apparatus; a second data creating unit configured to create second output data based on the first output data, the second output data being dependent on the image forming apparatus; an output data process unit configured to receive a second output data acquiring request to acquire the second output data from the image forming apparatus, and instruct the second data creating unit to create the second output data based on the first output data stored in the first output data storage part; and a transmitting unit configured to transmit the created second output data to the image forming apparatus. | 2015-10-29 |
20150309760 | DEVICES, SYSTEMS, AND METHODS FOR COMMUNICATING WITH AN IMAGE-FORMING DEVICE FROM A MOBILE DEVICE - Systems, devices, and methods for device communication receive, at a proxy device, an image of a barcode that was sent from a mobile device, wherein the barcode includes device information for an image-forming device, and wherein the device information identifies a network of the image-forming device; send the device information from the proxy device to one or more support devices; and at the one or more support devices, determine if the respective support device is connected to the network of the image-forming device, and in response to determining that the respective support device is connected to the network of the image-forming device, generate an output queue for the image-forming device on the support device that is connected to the network of the image-forming device. | 2015-10-29 |
20150309761 | INFORMATION PROCESSING APPARATUS, CONTROL METHOD, AND RECORDING MEDIUM - An information processing apparatus acquires connection information, which is used to communicate with an image processing apparatus via a first wireless communication complying with a first wireless communication standard, via a second wireless communication complying with a second wireless communication standard, transmits print data to the image processing apparatus via the first wireless communication, and performs control to delete the connection information in response to transmitting the print data to the image processing apparatus. | 2015-10-29 |
20150309762 | CONTENT RENDERING DEVICE - A display device for presenting content that includes a housing having an opening defined in a front portion, a display disposed in the opening and that presents content and/or content items, a mounting device disposed on a rear surface of the housing, a system for presenting content disposed inside the housing, and an attachment assembly that attaches to the mounting device and to an object for the purpose of displaying the content. | 2015-10-29 |
20150309763 | SYSTEM AND METHOD FOR IMAGE DISPLAY - An image display system includes a plurality of unit display devices, an image data buffer, and a location recognition unit. The location recognition unit recognizes the locations of the unit display devices and provides the image data buffer with location data. When the image data driver determines that the unit display devices are arranged in a first pattern, the plurality of unit display devices together displays a first image that corresponds to the first pattern. | 2015-10-29 |
20150309764 | MOBILE TERMINAL, DISPLAY CONTROL METHOD, AND PROGRAM - A mobile terminal provided with a plurality of display devices arranged such that frame parts surrounding display screens come in contact with each other and an image control section which switchably executes (i) a normal display mode where one display image is divided into display images according to the respective sizes of the display screens of the plurality of display devices and displayed on the plurality of display devices, and (ii) a complementary display mode where one display image is divided into display images according to the respective sizes of the display screens of the plurality of display devices with an image corresponding to a non-display portion formed by skipping the frame parts and displayed on the plurality of display devices. Accordingly, a sense of incongruity due to discontinuity of a displayed image by frame parts can be reduced and a lack of display information can be prevented. | 2015-10-29 |
20150309765 | INFORMATION SHARING SYSTEM, IMAGE PROCESSING APPARATUS, AND IMAGE PROCESSING METHOD - An information sharing system includes a first image processing apparatus and a second image processing apparatus connected thereto via a communication network. The first apparatus includes an input unit to input image data, a dividing unit to divide the data into divided image data items, a storage unit to store the divided image data items at respective destinations and output position data items indicating the destinations, a generation unit to generate notification data items each including a corresponding divided image data item and a corresponding position data item, and a transmission unit to sequentially transmit the notification data items to the second apparatus. The second apparatus includes a reception unit to receive the notification data items from the first apparatus and a reproduction unit to reproduce the image data by developing the divided image data items in order of receipt based on the data in the notification data items. | 2015-10-29 |
20150309766 | Displaying Regions of User Interest in Sharing Sessions - A meeting server facilitates an online conference session among a presenter device and a plurality of attendee devices, including a display of shared image data from the presenter device. The meeting server receives more than one indication, with each indication associated with a corresponding portion of the shared image data displayed on each of the attendee devices. The indications are combined into a message representing user interest in areas of the shared image data, and the message representing interest is transmitted to the presenter device. | 2015-10-29 |
20150309767 | ADAPTIVE CONTROL OF AN AUDIO UNIT USING MOTION SENSING - A system for adaptive control of an audio unit associated with a vehicle includes an electronic key fob, wherein the electronic key fob includes a sensor adapted to detect a motion event imposed on the electronic key fob by a user and a controller coupled to the sensor and configured to produce a control signal in response to the motion event. The system further includes a receiver installed in the vehicle and adapted to receive the control signal and another controller installed in the vehicle and interconnected between the receiver and the audio unit. The vehicle-based controller utilizes the control signal to adjust a volume of a sound produced by the audio unit. | 2015-10-29 |
20150309768 | Preference Conversion - Embodiments are provided for preference conversion. An example embodiment may involve detecting a first input indicating a first preference for a first media provided by a first media provider. The first preference may correspond to a first preference type. The embodiment may further involve converting the first preference to a converted first preference. The converted first preference may correspond to a second preference type and the second preference type may correspond to the first media provider. The embodiment may further involve sending the converted first preference to the first media provider. | 2015-10-29 |
20150309769 | TASK MANAGEMENT INTEGRATED DESIGN ENVIRONMENT FOR COMPLEX DATA INTEGRATION APPLICATIONS - Embodiments presented herein provide task management capabilities for designing a complex data integration workflow in an integrated design environment (IDE). A task management tool of the IDE allows a developer to tag various stages of a data integration workflow in a non-linear manner. When the task management tool receives a tag for a given stage, the task management tool identifies incomplete tasks associated with the stage and generates a task list that includes the incomplete tasks. The developer may return to completing any of the tasks in the workflow in any sequence as desired. | 2015-10-29 |
20150309770 | Software Application Development Tool - A software development tool for use with external systems and services uses a common code base and defines all data and messages using XML Schema. System components are defined which include a device abstraction layer which handles interactions between the application and devices. A host abstraction layer handles interactions between a host system and the application. A graphical tool models the work flow of the application and includes screens and services defined by Schema. The application is assembled using the graphical tool, declarative XML rules, and customisations of system components without the user having to generate any coding. | 2015-10-29 |
20150309771 | UNIFIED FLOW DESIGNER - An interface enables a user to select a graphical object to include in a flow. The graphical object is associated with code, and this code may relate to presenting digital content. The interface further allows a user to define a graphical relationship in the flow, such as a connection between the graphical object and another element of the flow. The interface may present the flow in a first area of a display and the digital content in a second area of the display. The code may be executed based on the graphical relationship. For example, the graphical relationship may indicate an order for executing code sections associated with the flow and data exchanged within the code sections. | 2015-10-29 |
20150309772 | Declarative Software Application Meta-Model and System for Self-Modification - A solution is provided for the dynamic design, use, and modification of models using a declarative software application meta-model that provides for self-modification of a collection of the models. The solution can enable continuous real-time testing, simulation, deployment, and modification of the collection of the models. A model in the collection of the models can represent an entity or a function and can be included in a set of related models. Additionally, a set of related models can include a plurality of sets of related models. The collection of the models can represent, for example, one or more software applications, processes, and/or the like. | 2015-10-29 |
20150309773 | MOBILE MEDICAL APPLICATIONS WITH SEPARATED COMMUNICATION AND DEVELOPMENT ENVIRONMENT FOR THE SAME - Systems and methods are provided for a mobile medical application operating environment and automated/semi-automated systems for creating application software for the operating environment. In the operating environment, all data storage and communication with external devices relating to sensitive medical data and operations is handled by a data manager application concurrently running with the medical application on a mobile device. Multiple medical applications can be run concurrently on the mobile device with reduced risk of data failure, thereby simplifying the design and release process for mobile medical applications. | 2015-10-29 |
20150309774 | METHOD AND DEVICE FOR CHANGING OBJECTS IN A HUMAN-MACHINE INTERFACE DEVICE - When a programmer creates an object for use in a display screen of a human-machine interface device of a programmable system, at least some of the properties of the object are associated with a variable quantity. The programmer determines the property or properties for which the corresponding variable quantity is reassignable and which are fixed, by carrying out a setting operation, and creating an association record identifying which property or properties have a reassignable variable quantity. The object may then be stored in a library and transferred to the memory of the human-machine interface device. If a subsequent programmer wants to re-use the object by reassigning the variable quantity of one or more of the properties of the object, the association record is used to determine the property or properties for which the corresponding variable quantity is reassignable. | 2015-10-29 |
20150309775 | APPARATUS FOR SITUATIONAL COGNITION AND POSITION DETERMINATION OF SCREEN OBJECT IN PROGRAM DEVELOPMENT, AND METHOD THEREFOR - A screen object positioning device and a screen object positioning method in program development are provided. The screen object positioning device includes: a parent object generating unit configured to generate a parent object having predetermined first object position information; a first child object generating unit configured to generate a first child object which is placed on the parent object and which has second object position information corresponding to a first positioning rule calculated using the first object position information; and a second child object generating unit configured to generate a second child object which is placed on the parent object and which has third object position information corresponding to a second positioning rule calculated using the second object position information. The screen object positioning device can position a child object placed on a parent object in consideration of a situation based on information of the parent object in program development. | 2015-10-29 |
20150309776 | IDENTIFYING POTENTIALLY UNINITIALIZED SOURCE CODE VARIABLES - Computer program source code is represented by nodes in a control flow graph. A set of target nodes is identified, where each node in the set of target nodes includes at least one line of source code that defines a modification to a particular variable used in the computer program. A usage score relating to the variable is calculated for each target node. Each usage score is then recalculated based on the earlier scores and also based on the modifications to the variable that are defined by the lines of source code. Each recalculated score is compared to its corresponding earlier score, and if any score has changed, then the process repeats. Scores are recalculated based on the most recently calculated scores until the scores stop changing. The final scores may then be displayed. | 2015-10-29 |
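The iterative score recalculation described in this abstract is, in essence, a fixed-point computation over a control-flow graph: scores are recomputed from their predecessors until nothing changes. A minimal sketch, with invented node IDs and an invented propagation rule (the abstract specifies neither):

```python
# Recompute per-node usage scores until they stop changing (fixed point),
# as the abstract describes. The propagation rule below is illustrative.

def fixed_point_scores(targets, preds, initial, update):
    """targets: node ids; preds: node -> predecessor node ids;
    initial: node -> starting score; update: (node, pred_scores) -> new score."""
    scores = dict(initial)
    changed = True
    while changed:
        changed = False
        for node in targets:
            new = update(node, [scores[p] for p in preds.get(node, [])])
            if new != scores[node]:
                scores[node] = new
                changed = True
    return scores

# Toy CFG 1 -> 2 -> 3; a node inherits the max of its predecessors' scores.
preds = {1: [], 2: [1], 3: [2]}
initial = {1: 1, 2: 0, 3: 0}
def propagate(node, pred_scores):
    return max(pred_scores) if pred_scores else initial[node]

scores = fixed_point_scores([1, 2, 3], preds, initial, propagate)
```

Termination here relies on the update rule being monotone over a finite score domain, the usual condition for such iterative dataflow analyses.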
20150309777 | METHOD OF CALL CONTEXT ENCODING - The present invention provides methods, systems and computer-program products in support of dynamic calling context encoding, in which call graph evolution is recorded in parallel with call events. In part, this can enable a calling context to be encoded on the fly at a low processing overhead without advance knowledge of the complete call graph. | 2015-10-29 |
20150309778 | SYSTEMS AND METHODS FOR APPROXIMATION BASED OPTIMIZATION OF DATA PROCESSORS - A compilation system can apply a smoothness constraint to the arguments of a compute-bound function invoked in a software program, to ensure that the value(s) of one or more function arguments are within specified respective threshold(s) from selected nominal value(s). If the constraint is satisfied, the function invocation is replaced with an approximation thereof. The smoothness constraint may be determined for a range of value(s) of function argument(s) so as to determine a neighborhood within which the function can be replaced with an approximation thereof. The replacement of the function with an approximation thereof can facilitate simultaneous optimization of computation accuracy, performance, and energy/power consumption. | 2015-10-29 |
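The smoothness-constraint idea, replacing a function invocation with an approximation only when its arguments stay within a threshold of a nominal value, can be illustrated with a toy substitution. The nominal value, threshold, and the sin(x) ≈ x approximation are assumptions for the example, not details from the application:

```python
import math

# Use a cheap approximation when the argument is within `threshold` of
# `nominal`; otherwise fall back to the exact function.

def maybe_approximate(f, approx, x, nominal, threshold):
    if abs(x - nominal) <= threshold:
        return approx(x)
    return f(x)

# sin(x) ~ x near 0: a classic smoothness-based substitution.
result_near = maybe_approximate(math.sin, lambda x: x, 0.001, 0.0, 0.01)
result_far = maybe_approximate(math.sin, lambda x: x, 1.5, 0.0, 0.01)
```

In the patented system this decision is made at compile time for compute-bound functions; the runtime check above only conveys the shape of the constraint.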
20150309779 | SYSTEMS AND METHODS FOR POWER OPTIMIZATION OF PROCESSORS - A compilation system generates one or more energy windows in a program to be executed on a data processor such that power/energy consumption of the data processor can be adjusted in each window, so as to minimize the overall power/energy consumption of the data processor during the execution of the program. The size(s) of the energy window(s) and/or power option(s) in each window can be determined according to one or more parameters of the data processor and/or one or more characteristics of the energy window(s). | 2015-10-29 |
20150309780 | COMPUTER-IMPLEMENTED METHOD, SYSTEM AND COMPUTER PROGRAM PRODUCT FOR DEPLOYING AN APPLICATION ON A COMPUTING RESOURCE - A computer-implemented method for deploying an application on a computing resource includes: defining sets of groups of tenants for categorizing the plurality of tenants; assigning each tenant to at least one group of tenants; providing a deployment model for each combination of an application component of the plurality of application components and a tenant; determining constraint information for each combination of application component and tenant depending on the deployment model, wherein the deployment model is configured to enable each tenant to include and/or exclude entire groups of tenants from sharing one or more application components and/or infrastructure of the computing resource; determining a valid deployment configuration of the application depending on the constraint information associated with each application component; and deploying the application on the computing resource accordingly. | 2015-10-29 |
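The include/exclude constraint on shared components can be sketched as a pairwise check over co-located tenants; group names and data shapes are invented for illustration:

```python
# A deployment of a shared component is valid only if no tenant on it
# excludes a group that another co-located tenant belongs to.

def valid_colocation(tenants, groups_of, excludes):
    """tenants: tenant ids sharing a component; groups_of: tenant -> set of
    groups; excludes: tenant -> set of groups that tenant excludes."""
    for t in tenants:
        for other in tenants:
            if other != t and groups_of[other] & excludes[t]:
                return False
    return True

groups_of = {"t1": {"retail"}, "t2": {"bank"}}
excludes = {"t1": {"bank"}, "t2": set()}
ok_alone = valid_colocation(["t1"], groups_of, excludes)
ok_shared = valid_colocation(["t1", "t2"], groups_of, excludes)
```

A valid deployment configuration, in the abstract's terms, is one where every shared component passes this kind of check.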
20150309781 | COMMAND LINES - Software is installed and/or un-installed in networks. Each of a plurality of networks has a network management system storing metadata comprising at least the identities and command lines of software installed using installation systems of the management systems. On each network, the network management system is accessed to obtain the metadata of items of software run on the network. That metadata is sent to a server which serves all the networks. At the server, the metadata of instances of the same software on different networks is compared. For those instances of the same software having the same metadata on different networks, the metadata is stored in a database. The networks use the metadata stored in the database to automatically install or un-install software. | 2015-10-29 |
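The server-side step, keeping only command-line metadata that agrees for the same software across networks, might look like the following sketch (field names are assumptions):

```python
# Keep a {software: cmdline} entry only when every network that reported
# that software reported the same command line; conflicts are dropped.

def consistent_metadata(reports):
    seen = {}
    conflicting = set()
    for r in reports:
        sw, cmd = r["software"], r["cmdline"]
        if sw in seen and seen[sw] != cmd:
            conflicting.add(sw)
        seen.setdefault(sw, cmd)
    return {sw: cmd for sw, cmd in seen.items() if sw not in conflicting}

reports = [
    {"network": "a", "software": "editor", "cmdline": "setup.exe /quiet"},
    {"network": "b", "software": "editor", "cmdline": "setup.exe /quiet"},
    {"network": "a", "software": "agent", "cmdline": "agent.msi /i"},
    {"network": "b", "software": "agent", "cmdline": "agent.msi /repair"},
]
db = consistent_metadata(reports)
```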
20150309782 | APPLICATION PROGRAM DOWNLOAD AND UPDATE METHOD FOR VEHICLE DEVICE - The present invention relates to an application program download and update method for a vehicle device. The method includes installing a proprietary micro-management application program in a mobile device; when a network server has an update file for a vehicle device, the network server pushes an update note to the mobile device. The user is then able to download the update file to the mobile device and, through a connection between the mobile device and the vehicle device, transmit the update file from the mobile device to the vehicle device to run a program update of the vehicle device. The vehicle owner is thus able to download the update program and update the vehicle device himself, thereby eliminating the formality and inconvenience of having to return the vehicle for servicing. | 2015-10-29 |
20150309783 | DYNAMIC UPDATING OF OPERATING SYSTEMS AND APPLICATIONS USING VOLUME ATTACHMENT - Examples disclosed herein provide systems, methods, and software to attach updated applications to computing devices. In one instance, a method of attaching updated applications to a computing device includes identifying an application update for an application stored on the computing device, and determining an updated application volume containing an updated version of the application. The method further includes mounting the updated application volume to the computing device, and overlaying the updated version of the application with the application stored on the computing device. | 2015-10-29 |
20150309784 | METHODS AND APPARATUS FOR UPDATING SOFTWARE COMPONENTS IN COORDINATION WITH OPERATIONAL MODES OF A MOTOR VEHICLE - Methods and apparatus are provided for updating at least one software component of a motor vehicle in coordination with predetermined safe operational modes of the vehicle permitting the updating without danger to a driver operating the motor vehicle. The method operates such that a receiver circuit of a hub controller of the motor vehicle receives and stores a software update module in a memory of the hub controller. A processor of the hub controller determines an operational condition of the motor vehicle and selectively updates at least one software component of the motor vehicle with the software update module responsive to the operational condition of the motor vehicle being in a predetermined safe operational mode permitting the updating without danger to a driver operating the motor vehicle. Preferably, the updating of the at least one software component with the software update module takes place only during DPF regeneration. | 2015-10-29 |
20150309785 | Enhanced Upgrade Path - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for establishing upgrade paths. In one aspect, a method includes establishing an interim environment and platform, migrating the data from the legacy environment and platform to the interim environment and platform, and migrating the data from the interim environment and platform to the upgraded environment and platform. | 2015-10-29 |
20150309786 | DISTRIBUTED STORAGE NETWORK FOR MODIFICATION OF A DATA OBJECT - A method for updating software in storage units of a dispersed storage network includes determining a software updating sequence pattern, which ensures that, for each set of encoded data slices, a decode threshold number of encoded data slices is accessible. The method includes taking a set of the storage units off-line for software updating in accordance with the software updating sequence pattern. The method includes, when the software has been successfully updated in the set of storage units, putting the set of storage units back on-line and taking another set of the storage units off-line in accordance with the software updating sequence pattern. The method includes, when the software has been successfully updated in the other set of storage units, putting the other set of storage units back on-line and taking yet another set of the storage units off-line for software updating in accordance with the software updating sequence pattern. | 2015-10-29 |
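The sequencing constraint here, that a decode threshold number of slices must stay accessible, bounds how many storage units may be off-line at once: with n units per set and decode threshold k, at most n - k may be down. A hedged batching sketch (the batching helper is an illustration, not the patented sequence-pattern logic):

```python
# Yield successive batches of at most n - k units to take off-line, so a
# decode threshold k of every set of n encoded data slices stays accessible.

def update_batches(units, n, k):
    width = n - k
    if width < 1:
        raise ValueError("decode threshold leaves no room for off-line units")
    for i in range(0, len(units), width):
        yield units[i:i + width]

# 8 units, decode threshold 5: at most 3 off-line at a time.
batches = list(update_batches(list(range(8)), n=8, k=5))
```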
20150309787 | Packaging Content Updates - Aspects of the present disclosure are directed to obtaining user feedback and causing a package of content updates to be created and distributed based on the received feedback. In accordance with one embodiment, a method is provided for creating a package that contains one or more content updates that are configured for implementation on a remote device. | 2015-10-29 |
20150309788 | FUNCTION MODULE MODULARIZING METHOD IN DATA DISTRIBUTION SERVICE AND MODULARIZING APPARATUS THEREOF - Disclosed is a modularizing apparatus for modularizing a function module in DDS middleware, including: a DCPS module providing an interface with an application program; and a library module initializing and creating the function module, classifying the created function module by function and storing the classified function module, and providing, to the DCPS module, a function module corresponding to a request by the DCPS module. | 2015-10-29 |
20150309789 | MODIFYING MOBILE APPLICATION BINARIES TO CALL EXTERNAL LIBRARIES - A method includes determining a system library method based on a configuration file in an application library. The method also includes generating a wrapper method for the system library method, wherein the wrapper method includes a first instruction to invoke the system library method, and a second instruction to invoke a method in an external library. The method further includes replacing a third instruction that invokes the system library method with a fourth instruction that invokes the wrapper method. A binary class in a plurality of binary classes in the application library comprises the third instruction. | 2015-10-29 |
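A loose Python analogue of the wrapper rewrite (the actual patent operates on binary classes in a mobile application library, not on Python functions; the logging callback stands in for the external-library method):

```python
# The wrapper's two instructions, per the abstract: first invoke the system
# library method, then invoke a method in an external library. Call sites
# that invoked the system method are repointed at the wrapper.

calls = []

def system_method(x):
    calls.append(("system", x))
    return x * 2

def external_hook(x):
    calls.append(("external", x))

def make_wrapper(original, hook):
    def wrapper(x):
        result = original(x)   # first instruction: invoke the system method
        hook(x)                # second instruction: invoke the external method
        return result
    return wrapper

system_method_call = make_wrapper(system_method, external_hook)  # repointed call site
value = system_method_call(21)
```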
20150309790 | SOURCE CODE VIOLATION MATCHING AND ATTRIBUTION - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for matching and attributing code violations. One of the methods includes receiving a snapshot S of a code base of source code and a different snapshot T of the code base. Data representing first violations in the snapshot S and second violations in the snapshot T is received. Pairs of matching violations are determined by performing two or more matching processes, including performing a first matching process, the first matching process determining first pairs of matching violations according to a first matching algorithm, and performing a second matching process, the second matching process determining second pairs of matching violations according to a second matching algorithm from violations not matched by the first matching process. The first pairs of matching violations and the second pairs of matching violations are included in the determined pairs of matching violations. | 2015-10-29 |
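The staged matching can be sketched as successive passes over the still-unmatched violations, each pass with a looser predicate; the exact-match and same-file predicates below are stand-ins for the unspecified matching algorithms:

```python
# First matching process pairs violations exactly; a second process runs a
# looser rule (same file) over only the violations the first left unmatched.

def match_violations(s_violations, t_violations):
    pairs = []
    remaining_s, remaining_t = list(s_violations), list(t_violations)

    def run(predicate):
        for sv in list(remaining_s):
            for tv in list(remaining_t):
                if predicate(sv, tv):
                    pairs.append((sv, tv))
                    remaining_s.remove(sv)
                    remaining_t.remove(tv)
                    break

    run(lambda a, b: a == b)          # first matching process: exact match
    run(lambda a, b: a[0] == b[0])    # second process: same file, moved line
    return pairs

s = [("main.c", 10), ("util.c", 3)]   # violations as (file, line)
t = [("main.c", 12), ("util.c", 3)]
pairs = match_violations(s, t)
```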
20150309791 | DYNAMICALLY RECOMMENDING CHANGES TO AN ASSOCIATION BETWEEN AN OPERATING SYSTEM IMAGE AND AN UPDATE GROUP - Dynamically recommending changes to an association between an operating system image and an update group includes monitoring a configuration of a deployed copy of a first master operating system (OS) image; detecting a modification in the configuration of the deployed copy; determining that the configuration of the deployed copy with the modification more closely matches a configuration of a second master OS image than a configuration of the first master OS image; in response to determining that the configuration of the deployed copy with the modification more closely matches the configuration of the second master OS image, generating an association recommendation that recommends associating the deployed copy with a second update group of the second master OS image; and associating the deployed copy with the second update group of the second master OS image instead of the first update group of the first master OS image. | 2015-10-29 |
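The "more closely matches" determination can be approximated with a simple agreement count between the deployed copy's configuration and each master image; the configuration keys are invented for the example:

```python
# Score a deployed copy against each master OS image by counting agreeing
# settings; recommend associating it with the best-scoring image's group.

def closeness(config, master):
    return sum(1 for k, v in master.items() if config.get(k) == v)

def recommend(deployed, masters):
    """masters: {image_name: config}. Returns the best-matching image name."""
    return max(masters, key=lambda name: closeness(deployed, masters[name]))

masters = {
    "image_a": {"db": "none", "web": "apache", "cache": "none"},
    "image_b": {"db": "postgres", "web": "apache", "cache": "redis"},
}
deployed = {"db": "postgres", "web": "apache", "cache": "redis"}  # after modification
choice = recommend(deployed, masters)
```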
20150309792 | REDUCING LATENCY FOR POINTER CHASING LOADS - Systems, methods, and apparatuses for reducing the load to load/store address latency in an out-of-order processor. When a producer load is detected in the processor pipeline, the processor predicts whether the producer load is going to hit in the store queue. If the producer load is predicted not to hit in the store queue, then a dependent load or store can be issued early. The result data of the producer load is then bypassed forward from the data cache directly to the address generation unit. This result data is then used to generate an address for the dependent load or store, reducing the latency of the dependent load or store by one clock cycle. | 2015-10-29 |
20150309793 | RESOURCE LOCKING FOR LOAD STORE SCHEDULING IN A VLIW PROCESSOR - A load/store unit including a memory queue configured to store a plurality of memory instructions and state information indicating whether each memory instruction of the plurality of memory instructions can be performed independently of, together with, separately from, or only after older pending instructions; and a state-selection circuit configured to set the state information of each memory instruction of the plurality of memory instructions in view of an older pending instruction in the memory queue. | 2015-10-29 |
20150309794 | BRANCH PREDICTION - A method and system for branch prediction are provided herein. The method includes executing a program, where the program comprises multiple procedures, and setting bits in a taken branch history register to indicate whether a branch is taken or not taken during execution of instructions in the program. The method further includes the steps of calling a procedure in the program and overwriting, responsive to calling the procedure, the contents of the taken branch history register with a start address for the procedure. | 2015-10-29 |
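A taken-branch history register of this kind can be modeled as a shift register that each branch outcome updates and that a procedure call overwrites with the procedure's start address; the 16-bit width and the encoding are assumptions for the sketch:

```python
# Model the taken branch history register: shift in 1 (taken) / 0 (not
# taken) per branch; on a procedure call, overwrite it with the start
# address, as the abstract describes.

WIDTH = 16
MASK = (1 << WIDTH) - 1

def record_branch(history, taken):
    return ((history << 1) | (1 if taken else 0)) & MASK

def on_call(history, start_address):
    return start_address & MASK   # overwrite with the procedure start address

h = 0
for taken in (True, False, True):
    h = record_branch(h, taken)
h_after_call = on_call(h, 0x1A2B)
```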
20150309795 | ZERO OVERHEAD LOOP - A method and apparatus for zero overhead loops are provided herein. The method includes the steps of identifying, by a decoder, a loop instruction and identifying, by the decoder, a last instruction in a loop body that corresponds to the loop instruction. The method further includes the steps of generating, by the decoder, a branch instruction that returns execution to a beginning of the loop body, and enqueuing, by the decoder, the branch instruction into a branch reservation queue concurrently with an enqueuing of the last instruction in a reservation queue. | 2015-10-29 |
20150309796 | RENAMING WITH GENERATION NUMBERS - A processor including a register file having a plurality of registers, and configured for out-of-order instruction execution, further includes a renamer unit that produces generation numbers that are associated with register file addresses to provide a renamed version of a register that is temporally offset from an existing version of that register rather than assigning a non-programmer-visible physical register as the renamed register. | 2015-10-29 |
20150309797 | Computer Processor With Generation Renaming - A processor including a register file having a plurality of registers, and configured for out-of-order instruction execution, further includes a renamer unit that produces generation numbers that are associated with register file addresses to provide a renamed version of a register that is temporally offset from an existing version of that register rather than assigning a non-programmer-visible physical register as the renamed register. The processor includes a small reset DHL Gshare branch prediction unit coupled to an instruction cache and configured to provide speculative addresses to the instruction cache. | 2015-10-29 |
20150309798 | REGISTER RESOURCE LOCKING IN A VLIW PROCESSOR - An apparatus including a queue configured to store a plurality of instructions and state information indicating whether each instruction of the plurality of instructions can be performed independently of older pending instructions; and a state-selection circuit configured to set the state information of each instruction of the plurality of instructions in view of an older pending instruction in the queue. | 2015-10-29 |
20150309799 | STUNT BOX - A processor including a stunt box with an intermediate storage, including a plurality of registers, configured to store a plurality of execution pipe results as a plurality of intermediate results; a storage, communicatively coupled to the intermediate storage, configured to store a plurality of storage results which may include one or more of the plurality of intermediate results; and an arbiter, communicatively coupled to the intermediate storage and the storage, configured to receive the plurality of execution pipe results, the plurality of intermediate results, and the plurality of storage results, and to select an output to retire from among the plurality of execution pipe results, the plurality of intermediate results, and the plurality of storage results. | 2015-10-29 |
20150309800 | Instruction That Performs A Scatter Write - A processor is described having an instruction execution pipeline. The instruction execution pipeline has an instruction fetch stage to fetch an instruction specifying multiple target resultant registers. The instruction execution pipeline has an instruction decode stage to decode the instruction. The instruction execution pipeline has a functional unit to prepare resultant content specific to each of the multiple target resultant registers. The instruction execution pipeline has a write-back stage to write back said resultant content specific to each of said multiple target resultant registers. | 2015-10-29 |
20150309801 | Electronic Device and Method for Data Processing Using Virtual Register Mode - The invention relates to an electronic device for data processing, which includes an execution unit with a temporary register, a register file, a first feedback path from the data output of the execution unit to the register file, a second feedback path from the data output of the execution unit to the temporary register, a switch configured to connect the first feedback path and/or the second feedback path, and a logic stage coupled to control the switch. The logic stage is configured to control the switch to connect the second feedback path if the data output of an execution unit is used as an operand in the subsequent operation of an execution unit. | 2015-10-29 |
20150309802 | SYSTEM AND METHODS FOR DYNAMIC MANAGEMENT OF HARDWARE RESOURCES - A dynamically reconfigurable framework manages processing applications in order to meet time-varying constraints to select an optimal hardware architecture. The optimal architecture satisfies time-varying constraints including for example, supplied power, required performance, accuracy levels, available bandwidth, and quality of output such as image reconstruction. The process of determining an optimal solution is defined in terms of multi-objective optimization using Pareto-optimal realizations. | 2015-10-29 |
20150309803 | METHOD AND APPARATUS FOR BOOTING PROCESSOR - A fail-safe booting system suitable for a system-on-chip (SOC) automatically detects and rectifies failures in power-on reset (POR) configuration or boot loader fetch operations. If a failure due to a boot loader fetch occurs, a POR configuration and boot loader are fetched from a different non-volatile memory. The reloading takes place from further different non-volatile memory sources if the boot loader fetch fails again. The automated system operates in accordance with a state machine, and does not involve any manual, on-board switch selection or manual re-programming. | 2015-10-29 |
20150309804 | DECOALESCING RESOURCE UTILIZATION AT BOOT - An embodiment provides a method, including: in a system, determining a set of processes which run at system boot; monitoring the processes at system boot for system resource utilization; categorizing processes of the set of processes based on said monitoring; and changing a start time during boot of at least one process based on said categorizing. Other aspects are described and claimed. | 2015-10-29 |
20150309805 | Booting a Physical Device Using Custom-Created Frozen Partially-Booted Virtual Machines - In one embodiment, a physical device (e.g., packet switching device, computer, server) is booted using custom-created frozen partially-booted virtual machines, avoiding the time required for an end-to-end boot process. In one embodiment while the system is operating under a current version, a partially-booted virtual image of a new operating version for each of multiple processing elements of the device is produced according to static configuration information specific to the device, with each of these partially-booted virtual machines frozen. The device is rebooted to a fully operational device by unfreezing these partially-booted virtual machines, thus removing this portion of a boot process from the real-time booting of the device. The generation of the frozen partially-booted virtual machines is advantageously performed by the device itself based on current static configuration information and the availability of the specific hardware configuration of the device. | 2015-10-29 |
20150309806 | DISPLAY APPARATUS AND CONTROLLING METHOD THEREOF - A display apparatus and a controlling method thereof are provided. The controlling method of a display apparatus includes receiving a power-off command through a control apparatus to control the display apparatus, storing image content information and identification information, the image content information being about an image content which is displayed by the display apparatus at a time at which the power-off command is input, and the identification information being about the control apparatus, in response to a power-on command being input, determining whether information included in the power-on command matches the stored identification information, and in response to the information included in the power-on command matching the stored identification information, displaying the stored image content information. | 2015-10-29 |
20150309807 | METHOD AND APPARATUS FOR WAKING DEVICE FROM POWER SAVE MODE - A method for waking a device from a power save mode to an active mode includes: transmitting, by a first device, a magic packet through a predetermined channel to a second device, which operates in the power save mode by repeating a doze state and an awake state according to a predetermined period of time, for notifying the second device to switch to the active mode; and retransmitting the magic packet to the second device through the predetermined channel if a response to the magic packet has not been received from the second device and a predetermined time has not elapsed after transmitting the magic packet through the predetermined channel. | 2015-10-29 |
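The retransmit-until-response-or-deadline loop in this abstract can be sketched with a faked transport and clock so the logic is testable; a real implementation would send the magic packet over UDP to the dozing device:

```python
# Send a magic packet, then retransmit through the same channel until a
# response arrives or the deadline passes, per the abstract.

def wake_device(send, poll_response, now, deadline):
    """send(): transmit the magic packet; poll_response() -> bool;
    now(): current time. Returns True if the device answered in time."""
    start = now()
    send()
    while now() - start < deadline:
        if poll_response():
            return True
        send()   # retransmit through the predetermined channel
    return False

# Fake clock and a device that answers on the third poll.
clock = {"t": 0}
polls = {"n": 0}
def fake_now():
    clock["t"] += 1
    return clock["t"]
def fake_poll():
    polls["n"] += 1
    return polls["n"] >= 3

woke = wake_device(send=lambda: None, poll_response=fake_poll,
                   now=fake_now, deadline=10)
```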
20150309808 | Method and System on Chip (SoC) for Adapting a Reconfigurable Hardware for an Application in Runtime - A method and System on Chip (SoC) for adapting a reconfigurable hardware for an application kernel at run time is provided. The method includes obtaining a plurality of Hyper-Operations corresponding to the application. A Hyper-Operation performs one or more of a plurality of MIMO functions of the application. The method further includes retrieving compute metadata and transport metadata corresponding to each Hyper-Operation. Compute metadata specifies functionality of a Hyper-Operation and transport metadata specifies data flow path of a Hyper-Operation. Thereafter, the method maps each Hyper-Operation to a corresponding set of tiles in the hardware. The set of tiles includes one or more tiles and a tile performs one or more of the plurality of MIMO functions of the application. | 2015-10-29 |
20150309809 | ELECTRONIC DEVICE AND METHOD OF LINKING A TASK THEREOF - A method of linking a task of an electronic device and the electronic device are provided. The method includes determining whether generation of an event satisfying a predetermined condition is detected; selecting another electronic device that is linkable to the electronic device when the generation of the event satisfying the predetermined condition is detected; and generating task environment information of an application and transmitting the task environment information to the other selected electronic device. | 2015-10-29 |
20150309810 | GLOBAL ENTRY POINT AND LOCAL ENTRY POINT FOR CALLEE FUNCTION - Embodiments relate to a global entry point and a local entry point for a callee function. An aspect includes executing, by a processor, a function call from a calling function to the callee function. Another aspect includes, based on the function call being a direct and external function call, entering the callee function at the global entry point and executing prologue code in the callee function that calculates and stores a table of contents (TOC) value for the callee function in a TOC register. Another aspect includes, based on the function call being a direct and local function call, entering the callee function at the local entry point, wherein entering the callee function at the local entry point skips the prologue code. Another aspect includes, based on the function call being an indirect function call, entering the callee function at the global entry point and executing the prologue code. | 2015-10-29 |
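A loose high-level analogue of the dual entry points (the patent concerns machine-level function linkage and a real TOC register; Python functions merely illustrate the skip-the-prologue idea, and the TOC value is invented):

```python
# Global entry: run prologue code that computes and stores a TOC-like value,
# then fall through to the body. Local entry: jump straight to the body,
# skipping the prologue, as direct local calls do in the abstract.

TOC = {}

def callee_local(x):               # local entry point: prologue skipped
    return x + 1

def callee_global(x):              # global entry point
    TOC["callee"] = 0x1000         # prologue: calculate and store the TOC value
    return callee_local(x)         # fall through to the local entry

external_result = callee_global(1)  # external or indirect call path
local_result = callee_local(2)      # direct, local call path
```

The design point being illustrated: local callers already share the caller's TOC, so skipping the prologue saves work without changing the result.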
20150309811 | Modifying an Application for Managed Execution - Methods and systems for configuring mobile applications for managed execution are described herein. Executable application binaries may each be converted into a corresponding dynamic library. The dynamic libraries may be bundled with a managing application that is configured to manage execution of the dynamic libraries at a mobile computing device. Resource files consumed by the application binary may also be bundled with the managing application and accessible to the dynamic libraries during execution. The managing application may provide a workspace within which operation of the dynamic library occurs. Operation of the dynamic library may at least partially correspond to operation of the executable application binary. Execution of the dynamic library may be bound to a process that is executed for the managing application at a processor of a computing device. | 2015-10-29 |
20150309812 | GLOBAL ENTRY POINT AND LOCAL ENTRY POINT FOR CALLEE FUNCTION - Embodiments relate to a global entry point and a local entry point for a callee function. An aspect includes executing, by a processor, a function call from a calling function to the callee function. Another aspect includes, based on the function call being a direct and external function call, entering the callee function at the global entry point and executing prologue code in the callee function that calculates and stores a table of contents (TOC) value for the callee function in a TOC register. Another aspect includes, based on the function call being a direct and local function call, entering the callee function at the local entry point, wherein entering the callee function at the local entry point skips the prologue code. Another aspect includes, based on the function call being an indirect function call, entering the callee function at the global entry point and executing the prologue code. | 2015-10-29 |
20150309813 | A System for analyzing applications in order to find security and quality issues - The present invention relates to the field of applications and, more specifically, to the analysis of applications for determining security and quality issues. The present invention describes an application analysis system providing a platform for analyzing applications, which is useful in finding security and quality issues in an application. In particular, the present invention comprises an advanced fusion analyzer that gains an understanding of application behavior through multi-way coordination and orchestration across the components used in the present invention. The analyzer builds and continuously refines a model representing the knowledge and behavior of the application as a large network of objects across different dimensions. Reasoning and learning logic operates on this model, together with information and events received from the components, both to refine the model further and to drive the components by sending information and events to them; the information and events received as a result trigger the entire process again until the system stabilizes. The present invention is useful in the analysis of internet/intranet-based web applications, desktop applications, mobile applications, and embedded systems, as well as for hardware, equipment, and machines controlled by software. | 2015-10-29 |
20150309814 | APPARATUS AND METHOD FOR VIRTUAL HOME SERVICE - The present invention provides an apparatus and method for virtual home service. The virtual home service apparatus comprises: a physical device registration unit registering or deleting physical devices present in a home and controlling the registered physical devices; a virtual device generation unit generating virtual devices using the registered physical devices; a context reasoning unit inferring in-home context using information of the generated virtual devices; and a service control unit generating a virtual device control command according to the result of the inferred in-home context and delivering the result to the virtual device generation unit. The virtual device generation unit generates hybrid virtual devices comprising a plurality of physical devices according to input of a user's virtual device generation command including physical device configuration information of the hybrid virtual devices, or generates a single virtual device corresponding one-to-one to a physical device according to input of profile information of the physical device. | 2015-10-29 |
20150309815 | AUGMENTING PROFILE DATA WITH INFORMATION GATHERED FROM A JIT COMPILER - A method, executed by a computer, for augmenting a first performance profile with data extracted from a Just-in-Time compiler, the Just-in-Time compiler compiling bytecodes into machine instructions and generating the first performance profile, the bytecodes having an associated original call structure includes: tracking “in-lining” optimizations performed by a Just-in-Time compiler compiling bytecodes into machine instructions; extracting data associated with the tracked “in-lining” optimizations; storing the extracted data in a second profile; and augmenting the first performance profile with the extracted data associated with the tracked “in-lining” optimizations, the extracted data comprising call paths corresponding to the original call structure associated with the bytecodes. A corresponding computer program product and computer system are also disclosed herein. | 2015-10-29 |
20150309816 | ADMINISTERING VIRTUAL MACHINES IN A DISTRIBUTED COMPUTING ENVIRONMENT - In a distributed computing environment that includes hosts which each execute a VMM, with each VMM supporting execution of one or more VMs, administering the VMs may include: assigning, by a VMM manager, the VMMs of the distributed computing environment to a logical tree topology, including assigning one of the VMMs as a root VMM of the tree topology; and executing, amongst the VMMs of the tree topology, an allgather operation, including: sending, by the root VMM, to other VMMs in the tree topology, a request to retrieve VMs supported by the other VMMs; pausing, by each of the other VMMs, a VM supported by the VMM; providing, by each of the other VMMs as a response to the root VMM's request, the paused VM; and broadcasting, by the root VMM to the other VMMs as a set of VMs, the received VMs. | 2015-10-29 |
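The allgather operation described above — root requests VMs, the other VMMs pause and provide them, and the root broadcasts the combined set back out — can be sketched as follows. This is a simplified illustration with hypothetical class and function names (not taken from the application), and it flattens the tree topology into a root plus a list of other VMMs for brevity:

```python
# Hypothetical classes; names are illustrative, not from the patent application.
class VMM:
    def __init__(self, name, vms):
        self.name = name
        self.vms = list(vms)      # VMs this VMM supports
        self.paused = []          # VMs paused in response to the root's request
        self.known_vms = []       # full set of VMs after the allgather

def allgather(root, others):
    """Root requests VMs from the other VMMs, each pauses and provides
    its VMs, then the root broadcasts the combined set to every VMM."""
    gathered = list(root.vms)
    for vmm in others:            # request, pause, and provide
        vmm.paused = list(vmm.vms)
        gathered.extend(vmm.paused)
    for vmm in [root] + others:   # broadcast the full set of VMs
        vmm.known_vms = list(gathered)
    return gathered

root = VMM("root", ["vm0"])
leaves = [VMM("a", ["vm1"]), VMM("b", ["vm2", "vm3"])]
print(allgather(root, leaves))  # ['vm0', 'vm1', 'vm2', 'vm3']
```

In a real tree topology the request and broadcast would propagate level by level rather than through a single flat loop, but the collective semantics (every VMM ends up holding the full set) are the same.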
20150309817 | ADMINISTERING VIRTUAL MACHINES IN A DISTRIBUTED COMPUTING ENVIRONMENT - In a distributed computing environment that includes hosts that execute a VMM, where each VMM supports execution of one or more VMs, administering VMs may include: assigning, by a VMM manager, the VMMs of the distributed computing environment to a logical tree topology, including assigning one of the VMMs as a root VMM of the tree topology; and executing, amongst the VMMs of the tree topology, a broadcast operation, including: pausing, by the root VMM, execution of one or more VMs supported by the root VMM; sending, by the root VMM, to other VMMs in the tree topology, a message indicating a pending transfer of the paused VMs; and transferring the paused VMs from the root VMM to the other VMMs. | 2015-10-29 |
20150309818 | METHOD OF VIRTUAL MACHINE MIGRATION USING SOFTWARE DEFINED NETWORKING - The present invention relates to a method of virtual machine migration that uses the protocol of software defined networking technology. When a virtual machine is migrated across domains, the local controller is notified rapidly so that it can submit the virtual machine's information to the switch in advance. Thereby, without modifying the network configuration, the migrated virtual machine can provide service continuously; optimal routing is achieved, effectively mitigating the triangle routing problem. | 2015-10-29 |
20150309819 | CORRELATING A UNIQUE IDENTIFIER OF AN INDEPENDENT SERVER NODE WITH A LOCATION IN A PRE-CONFIGURED HYPER-CONVERGED COMPUTING DEVICE - A pre-configured hyper-converged computing device for supporting a virtualization infrastructure includes a first independent server node at a first location comprising a first server node unique identifier, and a second independent server node at a second location comprising a second server node unique identifier. The first server node unique identifier correlates to the first location. The second server node unique identifier correlates to the second location, such that an exact location of the first or second independent server node is determined within the pre-configured hyper-converged computing device. | 2015-10-29 |
20150309820 | Mobile Device With Virtual Interfaces - Mobile devices, systems and methods are described with a plurality of virtual machines, wherein each virtual machine executes a separate virtual interface, or guest operating system. Each guest operating system corresponds to a different virtual device having its own contact list, applications, and so on. A virtual “device” can be controlled by an employer or service provider, and is a secure space that provides authenticated applications that are walled off from another virtual device. A host operating system provides a hardware abstraction layer. A proxy server on the host operating system receives an incoming signal from a remote device on the external network, and routes the incoming signal to one of the first and second virtual machines based on a call context. A method and computer program product for providing a plurality of virtual interfaces on a mobile device are also disclosed. | 2015-10-29 |
20150309821 | ADMINISTERING VIRTUAL MACHINES IN A DISTRIBUTED COMPUTING ENVIRONMENT - In a distributed computing environment that includes hosts which each execute a VMM, where each VMM supports execution of one or more VMs, administering the VMs may include: assigning, by a VMM manager, the VMMs of the distributed computing environment to a logical tree topology, including assigning one of the VMMs as a root VMM of the tree topology; and executing, amongst the VMMs of the tree topology, a gather operation, including: sending, by the root VMM, to other VMMs in the tree topology, a request to retrieve one or more VMs supported by the other VMMs; pausing, by the other VMMs, each VM requested to be retrieved; and providing, by the other VMMs to the root VMM, the VMs requested to be retrieved. | 2015-10-29 |
20150309822 | ADMINISTERING VIRTUAL MACHINES IN A DISTRIBUTED COMPUTING ENVIRONMENT - Administering VMs in a distributed computing environment that includes hosts that execute a VMM, with each VMM supporting execution of one or more VMs, includes: assigning the VMMs to a logical tree topology with one as a root; and executing, by the VMMs of the tree topology, a reduce operation, including: sending, by the root VMM to each of the other VMMs of the tree topology, a request for an instance of a particular VM; pausing, by each of the other VMMs, the requested instance of the particular VM; providing, by each of the other VMMs to the root VMM in response to the root VMM's request, the requested instance of the particular VM; and identifying, by the root VMM, differences among the requested instances of the particular VM, including performing a bitwise XOR operation amongst the instances of the particular VM. | 2015-10-29 |
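The final step of the reduce operation above — identifying differences among VM instances with a bitwise XOR — can be sketched as follows. This is a minimal illustration assuming each VM instance is represented as an equal-length byte string (the representation and the function name are assumptions, not from the application); an all-zero result means the instances are identical, and non-zero bytes mark positions where they differ:

```python
from functools import reduce

def xor_diff(images):
    """XOR equal-length VM instance images byte-by-byte. An all-zero
    result means the instances are identical; non-zero bytes mark
    the positions where they differ."""
    assert len({len(img) for img in images}) == 1, "images must be equal length"
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*images))

# Two identical instances, and one that differs in its third byte.
a = bytes([0x10, 0x20, 0x30, 0x40])
b = bytes([0x10, 0x20, 0x30, 0x40])
c = bytes([0x10, 0x20, 0x31, 0x40])

print(xor_diff([a, b]).hex())  # 00000000 -> instances are identical
print(xor_diff([a, c]).hex())  # 00000100 -> third byte differs
```

Note that XOR is only a clean difference detector between pairs of instances; with an odd number of instances, identical bytes cancel in pairs and a lone instance's bytes survive, so a pairwise comparison against a reference instance is the more natural use.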
20150309823 | ADMINISTERING VIRTUAL MACHINES IN A DISTRIBUTED COMPUTING ENVIRONMENT - In a distributed computing environment that includes hosts that execute a VMM, with each VMM supporting execution of one or more VMs, administering VMs may include: assigning, by a VMM manager, the VMMs of the distributed computing environment to a logical tree topology, including assigning one of the VMMs as a root VMM of the tree topology; and executing, amongst the VMMs of the tree topology, a scatter operation, including: pausing, by the root VMM, one or more executing VMs; storing, by the root VMM in a buffer, a plurality of VMs to scatter amongst the other VMMs of the tree topology; and sending, by the root VMM, to each of the other VMMs of the tree topology, a different one of the VMs stored in the buffer. | 2015-10-29 |
20150309824 | ADMINISTERING VIRTUAL MACHINES IN A DISTRIBUTED COMPUTING ENVIRONMENT - In a distributed computing environment that includes hosts that execute a VMM, where each VMM supports execution of one or more VMs, administering VMs may include: assigning, by a VMM manager, the VMMs of the distributed computing environment to a logical tree topology, including assigning one of the VMMs as a root VMM of the tree topology; and executing, amongst the VMMs of the tree topology, a broadcast operation, including: pausing, by the root VMM, execution of one or more VMs supported by the root VMM; sending, by the root VMM, to other VMMs in the tree topology, a message indicating a pending transfer of the paused VMs; and transferring the paused VMs from the root VMM to the other VMMs. | 2015-10-29 |
20150309825 | METHOD AND SYSTEM FOR SUPPORTING A CHANGE IN STATE WITHIN A CLUSTER OF HOST COMPUTERS THAT RUN VIRTUAL MACHINES - A method for supporting a change in state within a cluster of host computers that run virtual machines is disclosed. The method involves identifying a change in state within a cluster of host computers that run virtual machines, determining if predefined criteria for available resources within the cluster of host computers can be met by resources available in the cluster of host computers, and determining if predefined criteria for available resources within the cluster of host computers can be maintained after at least one different predefined change in state. In an embodiment, the steps of this method may be implemented in a non-transitory computer-readable storage medium having instructions that, when executed in a computing device, cause the computing device to carry out the steps. | 2015-10-29 |
20150309826 | METHOD AND SYSTEM FOR GENERATING REMEDIATION OPTIONS WITHIN A CLUSTER OF HOST COMPUTERS THAT RUN VIRTUAL MACHINES - A method for adjusting the configuration of host computers in a cluster on which virtual machines are running, in response to a failed change in state, is disclosed. The method involves receiving at least one reason why a change in state failed a present check or a future check, associating the at least one reason with at least one remediation action, wherein the remediation action would allow the change in state to pass both the present check and the future check, assigning the at least one remediation action a cost, and determining a set of remediation actions to perform based on the cost assigned to each remediation action. In an embodiment, the steps of this method may be implemented in a non-transitory computer-readable storage medium having instructions that, when executed in a computing device, cause the computing device to carry out the steps. | 2015-10-29 |
20150309827 | CONVERTING VIRTUAL MACHINE I/O REQUESTS - Systems, computer readable mediums, and techniques are described for converting virtual machine input/output (I/O) requests. One of the techniques includes obtaining access request data for one or more virtual machines (VMs) executing on a physical machine, wherein the access request data characterizes data access requests received from the one or more VMs; classifying, using the access request data, each of the one or more VMs as having either a sequential data access pattern or a random data access pattern; receiving a first I/O request packet from a first VM of the one or more VMs; determining that the first VM has been classified as having a random data access pattern; and splitting the first I/O request packet into a plurality of second I/O request packets based at least in part on determining that the first VM has been classified as having a random data access pattern. | 2015-10-29 |
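The classification-and-split technique in 20150309827 can be sketched as follows. This is an illustrative simplification: the gap threshold, the 4 KiB chunk size, and the dictionary representation of an I/O request packet are all assumptions, not details from the application:

```python
# Illustrative sketch; thresholds, chunk size, and names are assumptions.
def classify(access_offsets, gap_threshold=8):
    """Label a VM 'sequential' if most consecutive request offsets are
    near-adjacent, else 'random'."""
    gaps = [abs(b - a) for a, b in zip(access_offsets, access_offsets[1:])]
    near = sum(1 for g in gaps if g <= gap_threshold)
    return "sequential" if gaps and near / len(gaps) >= 0.5 else "random"

def maybe_split(packet, pattern, chunk=4096):
    """Split a large I/O request packet into chunk-sized second packets
    when the issuing VM is classified as having a random access pattern."""
    if pattern != "random" or packet["length"] <= chunk:
        return [packet]
    return [{"offset": packet["offset"] + off,
             "length": min(chunk, packet["length"] - off)}
            for off in range(0, packet["length"], chunk)]

pkt = {"offset": 0, "length": 10240}
print(len(maybe_split(pkt, "random")))      # 3 -- split into 4 KiB chunks
print(len(maybe_split(pkt, "sequential")))  # 1 -- left intact
```

The design intent sketched here is that random-pattern workloads gain little from large contiguous transfers, so splitting their requests lets the storage layer interleave and schedule them more freely, while sequential workloads keep their large requests intact.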
20150309828 | HYPERVISOR MANAGER FOR VIRTUAL MACHINE MANAGEMENT - Adaptive virtual servers with hypervisor managers may be used to manage several hypervisors, including hypervisors of different types. An adaptive virtual server may monitor resource utilization of virtual machines and dynamically assign resources to the virtual machines. Dynamic allocation of resources may improve efficiency for usage of available resources and improve performance of the virtual machines. Further, an adaptive virtual server may allocate resources to a virtual machine from multiple hypervisors, including hypervisors of different types. | 2015-10-29 |
20150309829 | PROVIDING EXCESS COMPUTE RESOURCES WITH VIRTUALIZATION - A main operating system interface engine can be configured to receive instructions from a main operating system of one or more host systems and can manage a virtualized operating system on the one or more host systems, the virtualized operating system appearing distinct from the main operating system to a user of the one or more host systems. A virtualization environment management engine can manage a virtualization environment, the virtualization environment using the virtualized operating system. A virtual machine management engine can manage one or more virtual machine instances in the virtualization environment, each of the one or more virtual machine instances operative to provide virtualized resources of the one or more host systems for a compute access system coupled to the one or more host systems. | 2015-10-29 |
20150309830 | ESTIMATING MIGRATION COSTS FOR MIGRATING LOGICAL PARTITIONS WITHIN A VIRTUALIZED COMPUTING ENVIRONMENT BASED ON A MIGRATION COST HISTORY - Responsive to a hypervisor determining that insufficient local resources are available for reservation to meet a performance parameter for at least one resource specified in a reservation request for a particular logical partition managed by the hypervisor in a host system, the hypervisor identifies another logical partition managed by the hypervisor in the host system that is assigned the at least one resource meeting the performance parameter specified in the reservation request. The hypervisor estimates a first cost of migrating the particular logical partition and a second cost of migrating the other logical partition to at least one other host system communicatively connected in a peer-to-peer network, based on at least one previously recorded cost, stored by the host system, of migrating a previous logical partition to the at least one other host system. | 2015-10-29 |
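The history-based cost estimate in 20150309830 can be sketched as follows. The averaging heuristic, the (size, cost) shape of a history record, and the function name are assumptions for illustration, not the application's actual method:

```python
# Minimal sketch; the per-GB averaging heuristic is an assumption.
def estimate_migration_cost(history, partition_size_gb):
    """Estimate the cost of migrating a logical partition to a peer host
    by scaling the average per-GB cost recorded for previous migrations."""
    per_gb = [cost / size for size, cost in history]
    return partition_size_gb * sum(per_gb) / len(per_gb)

# Previously recorded (size_gb, cost_seconds) migrations to a peer host.
history = [(8, 40.0), (16, 96.0), (4, 24.0)]

small = estimate_migration_cost(history, 4)   # cost for a 4 GB partition
large = estimate_migration_cost(history, 32)  # cost for a 32 GB partition
# The hypervisor would prefer migrating the partition with the lower estimate.
print(small < large)  # True
```

The key idea the sketch preserves is that the estimate is grounded in previously recorded costs to the same destination host, so the hypervisor can compare the first and second costs and migrate whichever partition frees the needed resource more cheaply.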