34th week of 2015 patent application highlights part 43 |
Patent application number | Title | Published |
20150234598 | MEMORY DEVICE AND HOST DEVICE - According to one embodiment, a memory device includes a nonvolatile semiconductor memory, which has physical storage areas that include an externally accessible user area and are divided into management units, and a control unit. The control unit receives a control command having a first argument to designate a sequential write area and a read command or a write command, assigns the management unit represented by the address of the read command or the write command as the sequential write area, and changes memory access control by judging whether the address of a memory access command to access the user area indicates access in the sequential write area, whose size is equivalent to the management unit. | 2015-08-20 |
20150234599 | LOCATING DATA IN NON-VOLATILE MEMORY - Systems and methods presented herein provide for locating data in non-volatile memory by decoupling a mapping unit size from restrictions such as the maximum size of a reducible unit to provide efficient mapping of larger mapping units. In one embodiment, a method comprises mapping a logical page address in a logical block address space to a read unit address and a number of read units in the non-volatile memory. The method also comprises mapping data of the logical page address to a plurality of variable-sized pieces of data spread across the number of read units starting at the read unit address in the non-volatile memory. | 2015-08-20 |
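As a rough illustration of the mapping idea in the abstract above, and not the claimed method, the sketch below records that a logical page's variable-sized pieces start at a read unit address and span a number of read units. All names (`ReadSpan`, `map_logical_page`, the 512-byte read unit size) are invented for illustration.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ReadSpan:
    read_unit_addr: int   # first read unit holding the page's data
    n_read_units: int     # how many consecutive read units the data spans

# Map table: logical page address -> span of read units in non-volatile memory.
map_table: Dict[int, ReadSpan] = {}

def map_logical_page(lpa: int, pieces: List[bytes], first_ru: int,
                     ru_size: int = 512) -> ReadSpan:
    """Record that the variable-sized pieces of page `lpa` start at read
    unit `first_ru` and spill across as many read units as they need."""
    total = sum(len(p) for p in pieces)
    n_units = -(-total // ru_size)        # ceiling division
    span = ReadSpan(first_ru, n_units)
    map_table[lpa] = span
    return span

# Two variable-sized pieces totalling 1200 bytes span three 512-byte read units.
span = map_logical_page(42, [b"x" * 300, b"y" * 900], first_ru=7)
```

The point of the decoupling described in the abstract is visible here: the mapping unit (one logical page) is larger than a read unit, so one map entry covers several read units instead of one entry per read unit.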
20150234600 | SELECTIVE COPYING OF TRACK DATA THROUGH PEER-TO-PEER REMOTE COPY - In one embodiment, a method includes receiving a request to establish a Peer-to-Peer Remote Copy (PPRC) relationship between a primary storage system and a secondary storage system, and copying one or more data tracks of a primary storage device in the primary storage system to the secondary storage system without copying at least one other data track of the primary storage device to the secondary storage system. The one or more data tracks of the primary storage device comprise one or more data tracks of a first characteristic. Other portions of the primary storage device comprise one or more other data tracks of a second characteristic. | 2015-08-20 |
20150234601 | COMMAND QUEUING - The present disclosure includes apparatuses and methods for command queuing. A number of embodiments include receiving a queued command request at a memory system from a host, sending a command response from the memory system to the host that indicates the memory system is ready to receive a command in a command queue of the memory system, and receiving, in response to sending the command response, a command descriptor block for the command at the memory system from the host. | 2015-08-20 |
20150234602 | DATA STORAGE DEVICE FOR FILTERING PAGE IN TWO STEPS, SYSTEM INCLUDING THE SAME, AND METHOD OF OPERATING THE SAME - A data storage device includes a filter, a central processing unit (CPU), a first memory configured to store a page, a second memory, and a page type analyzer configured to analyze a type of the page output from the first memory and to transmit an indication signal to the CPU according to an analysis result. According to control of the CPU that operates based on the indication signal, the filter passes the page to the second memory or filters each row in the page, and transmits first filtered data to the second memory. | 2015-08-20 |
20150234603 | MEMORY DEVICE WITH VARIABLE TRIM PARAMETERS - A memory device comprises a memory array comprising a plurality of memory cells; two or more fuses coupled to the memory array, wherein each of the two or more fuses contains trim data for the memory array; and a mode register for selecting one of the two or more fuses to be enabled. | 2015-08-20 |
20150234604 | STORAGE SYSTEM AND ACCESS CONTROL METHOD THEREOF - A storage system and an access control method thereof are provided. The storage system receives a first I/O request from at least one hypervisor. The first I/O request is used for accessing a first disk file of the disk files. The storage system then performs a first I/O operation on a first virtual disk of the virtual disks according to the first I/O request, since the disk files correspond to the virtual disks. The storage system reads QoS data of the first disk file and determines a first delay period according to the QoS data. The storage system transmits a first I/O response to the at least one hypervisor after the first delay period. | 2015-08-20 |
20150234605 | MEMORY SYSTEM AND METHOD OF CONTROLLING MEMORY SYSTEM - According to one embodiment, a memory system is provided wherein an interruption generating unit generates an interruption signal for one or more commands executed by a transfer executing unit when an end number counter is greater than or equal to a first threshold. A transfer type conjecturing unit determines whether the transfer type of a first command to be executed after transmitting the interruption signal is sequential transfer or random transfer, and sets the first threshold to a different value depending on whether the transfer is determined to be sequential or random. | 2015-08-20 |
20150234606 | STORAGE DEVICE FOR PERFORMING IN-STORAGE COMPUTING OPERATIONS, METHOD OF OPERATING THE SAME, AND SYSTEM INCLUDING THE SAME - A storage device performs in-storage computing operations, and includes a non-volatile memory configured to store data and a controller. The controller may include an on-chip memory and may control an operation of the non-volatile memory. The controller receives a data processing code generated by a host, overlays the data processing code on the on-chip memory, processes first data corresponding to the data processing code among the data stored in the non-volatile memory, and transmits the processed first data to the host. | 2015-08-20 |
20150234607 | DISK DRIVE AND DATA SAVE METHOD - According to one embodiment, a disk drive includes a data-save control unit configured to, when a decrease of power is detected, save data in a volatile memory to a non-volatile memory using a backup power source. The disk drive further includes a command processing unit configured to, when new data is stored in the volatile memory and the amount of unsaved data, which has not yet been saved to a disk media memory, exceeds the backup data amount that can be saved from the volatile memory to the non-volatile memory, save the unsaved data to the disk media memory. | 2015-08-20 |
20150234608 | METHOD AND SYSTEM FOR GROUP REPLICATION AND SHIPPING OF DIGITAL MEDIA - A method for replicating content onto storage devices for shipping to a common destination commences by first grouping work orders, which specify replication of content onto a plurality of storage devices associated with a common destination, into a group replication job. The content specified in the group replication job is then replicated onto the plurality of storage devices. Thereafter, the storage devices, now replicated with content, are readied for shipping to the common destination. | 2015-08-20 |
20150234609 | METHOD FOR ACCESSING FLASH MEMORY AND ASSOCIATED CONTROLLER AND MEMORY DEVICE - The present invention provides a method for accessing a flash memory, wherein the flash memory is a Triple-Level Cell flash memory and each word line of the flash memory constitutes a least significant bit (LSB) page, a central significant bit (CSB) page and a most significant bit (MSB) page, each storage unit of each word line of the flash memory is implemented by a floating-gate transistor, and each storage unit supports at least eight write voltage levels, the method includes: generating dummy data according to data of a first page and a second page corresponding to a specific word line of the flash memory, wherein the dummy data is going to be written in a third page corresponding to the specific word line; and writing the data and the dummy data into the flash memory. | 2015-08-20 |
20150234610 | ALL-IN-ONE DATA STORAGE DEVICE INCLUDING INTERNAL HARDWARE FILTER, METHOD OF OPERATING THE SAME, AND SYSTEM INCLUDING THE SAME - A data storage device includes a central processing unit (CPU) executing an application and a hardware filter. A method of operating the data storage device may include initializing the hardware filter based on initialization information corresponding to a changed application when the application is changed so that the hardware filter supports the changed application, filtering read data that is output from a second memory based on filtering condition data, outputting the filtered data using the hardware filter that has been initialized, and transmitting the filtered data to a host via a first memory. | 2015-08-20 |
20150234611 | LOCAL AREA NETWORK FREE DATA MOVEMENT - Systems and methods for backing up data associated with storage area network (SAN) data stores connected to a backup device over a SAN such that the backup is performed without using a local area network (LAN). The systems and methods include receiving a snapshot of a virtual machine (VM), the VM being associated with a VM datastore disk, which is further associated with a unique ID. The unique ID associated with the VM datastore disk is compared with a unique ID associated with a disk available on the computing device. When the unique ID associated with the VM datastore disk matches the unique ID associated with the disk on the computing device, the disk on the computing device with the matching unique ID is opened for reading, and data from the opened disk is copied to a copy data storage pool over a storage area network. | 2015-08-20 |
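The disk-matching step in the abstract above can be sketched generically: compare the VM datastore disk's unique ID against the IDs of disks visible on the backup device, and open the match for a LAN-free copy over the SAN. This is a hedged illustration only, not the claimed method; the function name, the example UUIDs, and the device paths are all invented.

```python
from typing import Dict, Optional

def find_matching_disk(vm_disk_uuid: str,
                       local_disks: Dict[str, str]) -> Optional[str]:
    """Return the local device path whose unique ID matches the VM
    datastore disk's unique ID, or None when no local disk matches."""
    for path, uuid in local_disks.items():
        if uuid == vm_disk_uuid:
            return path   # this disk can be read directly over the SAN
    return None

# The VM datastore disk's unique ID matches one locally visible disk.
path = find_matching_disk("6001-aaaa",
                          {"/dev/sdb": "6001-aaaa", "/dev/sdc": "6001-bbbb"})
```

When `find_matching_disk` returns a path, the copy to the backup pool can proceed over the storage network; when it returns `None`, the LAN-free path is unavailable.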
20150234612 | Multiprocessor System with Independent Direct Access to Bulk Solid State Memory Resources - A system has a collection of central processing units. Each central processing unit is connected to at least one other central processing unit and has a path into flash memory resources. A central processing unit supports a mapping from a data address space, to a flash memory virtual address space, to a flash memory virtual page number to a flash memory physical address space. | 2015-08-20 |
20150234613 | LEVEL PLACEMENT IN SOLID-STATE MEMORY - Methods and apparatus are provided for determining level placement in q-level cells of solid-state memory, where q>2. A group of cells, each programmed to a respective programming level, is read at a series of time instants to obtain a sequence of read metric values for each cell. Statistical data as a function of time for each level is derived by processing the sequences of read metric values for the group of cells. At least one parameter of a model defining variation of the statistical data with time is determined. A set of q programming levels having a pre-determined property over time is then calculated based on variation of the parameter as a function of level and on the model. | 2015-08-20 |
20150234614 | File Processing Method and Apparatus, and Storage Device - A file processing method and a storage device for storing a file in a redundant array of independent disks (RAID) are disclosed. In this method, the storage device divides received F files into multiple data blocks, and obtains a first matrix with T rows according to the multiple data blocks. Data blocks belonging to one file are located in one row of the first matrix. The storage device then writes a stripe, which consists of data blocks in each column in the first matrix and a check block that is obtained by computing according to the data blocks in the column, into the RAID. Using the file processing method, the storage device can write one file into one disk of the RAID while ensuring security of file storage, thereby achieving a better energy saving effect when the file is read. | 2015-08-20 |
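The column-stripe arrangement in the abstract above can be illustrated with generic RAID-style XOR parity: each file fills one row of the matrix, and each column's data blocks plus a computed check block form one stripe. This is a minimal sketch of the general technique, not the claimed method; the matrix contents and names are invented for illustration.

```python
from typing import List

def xor_parity(blocks: List[bytes]) -> bytes:
    """Byte-wise XOR of equal-length data blocks (the column's check block)."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

# First matrix: one row per file (T=2 rows, 3 columns of data blocks).
matrix = [
    [b"\x01\x02", b"\x03\x04", b"\x05\x06"],   # file A's blocks
    [b"\x10\x20", b"\x30\x40", b"\x50\x60"],   # file B's blocks
]

# One stripe = the data blocks of a column plus the computed check block.
stripes = []
for col in range(3):
    column = [row[col] for row in matrix]
    stripes.append(column + [xor_parity(column)])
```

Because every block of file A sits in the same row, all of file A's blocks land on the same disk of the array, which is what lets the other disks spin down when only that file is read.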
20150234615 | DATA PROCESSING DEVICE AND DATA PROCESSING METHOD - Embodiments of the present invention provide a data processing device and a data processing method. In the data processing device and the data processing method provided by the embodiments of the present invention, first data in a memory is written into a first non-volatile storage unit in a log file form, and a log file of the first data written into the first non-volatile storage unit is written into a second non-volatile storage unit. Because a data write speed of the first non-volatile storage unit is higher than a data write speed of the second non-volatile storage unit, fast backup of the data in the memory can be achieved, and when the data in the memory is lost in an abnormal situation, security of the data in the memory can be ensured. | 2015-08-20 |
20150234616 | SYSTEM AND METHOD FOR PROVIDING LONG-TERM STORAGE FOR DATA - A system for storing files comprises a processor and a memory. The processor is configured to break a file into one or more segments; store the one or more segments in a first storage unit; and add metadata to the first storage unit so that the file can be accessed independent of a second storage unit, wherein a single namespace enables access for files stored in the first storage unit and the second storage unit. The memory is coupled to the processor and configured to provide the processor with instructions. | 2015-08-20 |
20150234617 | METHOD AND APPARATUS FOR VIRTUAL MACHINE LIVE STORAGE MIGRATION IN HETEROGENEOUS STORAGE ENVIRONMENT - Embodiments pertain to live storage migration for virtual machines. Specific embodiments can implement the migration of VM disk images without service interruption to the running workload. Specific embodiments relate to storage migration between different disk arrays. Embodiments of the subject invention relate to a method and apparatus that can enhance the efficiency of virtual machine (VM) live storage migration in heterogeneous storage environments from a multi-dimensional perspective, e.g., user experience, device wearing, and/or manageability. Specific embodiments utilize one or more of the following: adaptive storage migration strategies, or techniques, such as 1) Low Redundancy (LR), which generates a reduced, and preferably the least, amount of redundant writes; 2) Source-based Low Redundancy (SLR), which can help keep a desirable balance between IO performance and write redundancy; and 3) Asynchronous IO Mirroring (AIO), which seeks high, and preferably the highest, IO performance. Specific embodiments adaptively mix one or more of these adaptive storage migration techniques during massive VM live storage migration. | 2015-08-20 |
20150234618 | STORAGE MANAGEMENT COMPUTER, STORAGE MANAGEMENT METHOD, AND STORAGE SYSTEM - When executing an instruction for changing a configuration of a virtual storage apparatus during an inter-enclosure data migration of storage apparatuses that provide the virtual storage apparatus, an appropriate command is issued to an appropriate storage apparatus. | 2015-08-20 |
20150234619 | METHOD OF STORING DATA, STORAGE SYSTEM, AND STORAGE APPARATUS - A method of storing data using a first storage apparatus, a second storage apparatus, and a third storage apparatus coupled with each other through a network includes the following. The first storage apparatus receives a processing request for first data and second data. The first storage apparatus includes, in one packet, the first data as data to be addressed to the second storage apparatus and the second data as data to be addressed to the third storage apparatus. The first storage apparatus transmits the one packet to the second storage apparatus. After the first data and the second data are transmitted, the second storage apparatus transmits the second data to the third storage apparatus. | 2015-08-20 |
20150234620 | Printing Device, Reading System, and POS System - A printing device | 2015-08-20 |
20150234621 | PRINT SYSTEM, PRINT SERVER, CONTROL METHOD THEREOF, AND PROGRAM - A client terminal transmits a request to a printing apparatus for a registration web page for registering, in a print server, a printing apparatus used in a print service provided by the print server. The printing apparatus collects configuration information of the printing apparatus in response to reception of the request, and creates link information which contains the collected configuration information and is used to access the print server. The printing apparatus then generates a registration web page containing the created link information, and transmits it to the client terminal. The print server receives the configuration information of the printing apparatus transmitted from the client terminal via the registration web page transmitted to the client terminal. The print server creates printing apparatus information which associates the configuration information with user information of the user of the client terminal, and manages it in a storage medium. | 2015-08-20 |
20150234622 | INFORMATION PROCESSING APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM - Information processing apparatuses, control methods and storage mediums are provided, which may transmit or attempt to transmit data to a target device by using network information, modify a MAC address in the network information to a MAC address of a device having same device identification information as the target device when a communication with the target device is not established, and, when the MAC address is modified, transmit the data to the device having the same device identification information as the target device. In one or more embodiments, a network port monitor may transmit an inquiry notification by way of broadcast when data is not transmitted to the target device. When a response to the inquiry notification is received from a device having same device identification information as the target device, the device having the same device identification information is set as a new connection or transmission destination. | 2015-08-20 |
20150234623 | PRINT-COMMAND SUPPORT DEVICE AND NON-TRANSITORY COMPUTER READABLE MEDIUM - A print-command support device includes an accepting unit, an authenticating unit, a receiving unit, and a processing unit. The accepting unit accepts user information related to a user of a printing apparatus. The authenticating unit authenticates the user based on the user information. The receiving unit receives request information from a print command device, which displays information related to printing apparatuses and selects one apparatus therefrom to command the apparatus to perform printing. The request information is for requesting transmission of the information related to the apparatus. The processing unit performs a transmission or determination process in accordance with reception of the request information. The transmission process is for transmitting the apparatus-related information and information related to authentication by the authenticating unit to the print command device. The determination process is for determining whether to transmit the apparatus-related information to the print command device based on the authentication-related information. | 2015-08-20 |
20150234624 | USER AUTHENTICATION SYSTEM - A portable terminal includes a first sound output unit and a first sound input unit, and an image forming apparatus includes a second sound output unit, a second sound input unit, an authentication information generating unit generating a security code, a storage unit storing the generated security code, and an authentication confirmation unit performing a user authentication by using the security code. When the first sound input unit receives a first synthesized sound, which includes the stored security code and is outputted from the second sound output unit, the first sound output unit outputs a second synthesized sound including the security code extracted from the received first synthesized sound, and the authentication confirmation unit determines that a user authentication is successful when the stored security code and the security code extracted from the received second synthesized sound match. | 2015-08-20 |
20150234625 | Simulation of Preprinted Forms - In one embodiment, a method for the simulation of preprinted forms is disclosed. The method includes receiving a first image as a back drop of a form, the image including a plurality of printable features corresponding to positions of the image. A second image is received as data to be filled in to the form, the second image including a second plurality of printable features corresponding to positions of the image, wherein the second plurality of printable features each have an assigned ink transparency. A feature of the first image is blended with a corresponding feature of the second image based on the assigned ink transparencies to form a blended feature. The blended features are combined to form a blended image that blends the first and the second images and is suitable for printing. | 2015-08-20 |
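The per-feature ink-transparency blending in the abstract above amounts to standard alpha compositing of the fill-in image over the backdrop image. The sketch below shows this for a single 8-bit channel value; it is a hedged illustration of the general blending arithmetic, not necessarily the patented scheme, and the sample pixel values are invented.

```python
def blend_pixel(backdrop: int, overlay: int, alpha: float) -> int:
    """Blend one 8-bit channel value; alpha=1.0 means fully opaque ink,
    alpha=0.0 lets the backdrop (the preprinted form) show through."""
    return round(alpha * overlay + (1.0 - alpha) * backdrop)

# A dark form feature (value 0) with 60% opaque ink, blended onto a
# light backdrop pixel (value 200).
blended = blend_pixel(backdrop=200, overlay=0, alpha=0.6)
```

Applying this per channel and per pixel, and then combining all blended features, yields the single printable image that simulates filling in a preprinted form.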
20150234626 | Client Device Using a Web Browser to Control a Periphery Device Via a Printer - A device control system has a terminal | 2015-08-20 |
20150234627 | DISPLAY APPARATUS, PRINT CONTROL METHOD, AND PROGRAM - A display apparatus displays an identification information element corresponding to an image currently displayed on a display unit in accordance with a first instruction from a user. In accordance with a second instruction from the user, the display apparatus causes a printing apparatus to perform printing based on an image data element corresponding to an identification information element selected by the user from among one or more identification information elements being displayed. | 2015-08-20 |
20150234628 | ePOS Printing - A script language compatible with HTML is used to define methods or objects capable of communicating directly with an intelligent module for printing operations without going through a web browser's print selection option. A print API library provides the needed methods/objects for embedding into a web page. The intelligent module may be a stand-alone electronic device, or may be an intelligent device incorporated into a printer. The intelligent module may manage multiple printers directly or through a network, and it functions to provide a communication bridge for translating/conveying communication between the print APIs on a web page and a target printer. The print API knows the fixed IP address of the intelligent module, and can define a print document or print commands and send them directly to the intelligent module by means of the known IP address. | 2015-08-20 |
20150234629 | PORTABLE DEVICE AND METHOD FOR CONTROLLING THE SAME - A method for controlling a portable device including first and second display units at opposing surfaces of the portable device. The method includes detecting one of a first unlock command for switching a state of the first display unit to an active state and maintaining a state of the second display unit in a locked state or a second unlock command for switching the state of the first display unit to the active state and switching the state of the second display unit to a ready-to-activate state; switching the state of the first display unit to the active state and switching the state of the second display unit to the ready-to-activate state when the second unlock command is detected; detecting an unlock trigger for switching the second display unit, which is in the ready-to-activate state, to the active state; and switching the second display unit, which is in the ready-to-activate state, to the active state according to the detected unlock trigger. | 2015-08-20 |
20150234630 | DETACHABLE MACHINE CONTROL PANEL WITH DISPLAY - There is provided a machine control panel with a display connected to a numerical controller of a machine tool. A part of a display section and an operation section are detachable from a main body. When a detachable section configured from the part of the display section and the operation section is detached from the main body, the detachable section is usable as a portable control panel. On the other hand, in a state in which the detachable section is attached to the main body, the display section and another display section fixed to the main body are combined as one screen to perform display. | 2015-08-20 |
20150234631 | SYNCHRONOUS DISPLAY METHOD OF SPLICED DISPLAY SCREEN, AND TIMING CONTROLLER AND SPLICED DISPLAY SCREEN USING THE SAME - The disclosure provides a synchronous display method of a spliced display screen which comprises at least two spliced display units and at least two timing controllers respectively corresponding to the spliced display units, wherein the method comprises the steps of: receiving, by each timing controller, a timing control signal for a current frame of the corresponding spliced display unit, fed back from the spliced display unit corresponding to the timing controller; determining, by each timing controller, a phase difference between the timing control signal for the current frame of the corresponding spliced display unit and a reference timing control signal received by the timing controller; judging, by each timing controller, whether or not the phase difference goes beyond a predetermined threshold range; if it is judged that the phase difference goes beyond the predetermined threshold range, generating a phase adjustment value, by the timing controller, based on the phase difference, wherein the phase adjustment value is less than the phase difference; generating, by each timing controller, a next timing control signal for a next frame of the corresponding spliced display unit, based on the phase adjustment value, so that the next phase difference between the next timing control signal for the next frame and the reference timing control signal is the phase adjustment value; and outputting the next timing control signal for the next frame to the corresponding spliced display unit. The disclosure also provides a timing controller used in this synchronous display method and a spliced display screen to which this synchronous display method is applied. | 2015-08-20 |
20150234632 | MULTI-PROCESSOR VIDEO PROCESSING SYSTEM AND VIDEO IMAGE SYNCHRONOUS TRANSMISSION AND DISPLAY METHOD THEREIN - The present invention relates to the field of video processing. Disclosed are a multi-processor video processing system and a video image synchronous transmission and display method therein. Via PCIE bus technology, synchronous transmission and tiled display of video images in the multi-CPU system are implemented. In the present invention, the multi-processor system includes multiple processors that are connected via a PCIE bus, each comprising a display unit and a decoding unit; a memory area of the display unit comprises two buffers, a read information packet containing a read flag, and a write information packet containing a write flag. The method includes the following steps: when a decoding unit generates a frame of uncompressed image, performing the following steps on each corresponding display unit: if the read and write flags corresponding to the display unit are equal, sending, by calling the PCIE bus or using local transmission, the image to the idle buffer indicated by the write flag, and negating the write flag; each display unit querying for the corresponding read and write flags according to a display refresh frequency; and, if the read and write flags are not equal, using the buffer indicated by the read flag as the storage area for the data to be displayed next time, and setting the read flag to the write flag value. | 2015-08-20 |
20150234633 | Methods and Systems for Voice Management - Methods and systems for voice management are provided. First, data is detected by a proximity sensor and by an attitude sensor. When the data detected by the proximity sensor indicates the presence of an object, and the data detected by the attitude sensor indicates a specific attitude, a voice management process, such as a voice playback process or a voice recording process, is performed. | 2015-08-20 |
20150234634 | MULTIPLE NETWORKING IN AUDIO PROCESSING SYSTEM - An audio processing system includes a console (control device), an engine (processing device) and an I/O unit (input/output device). The console and the engine are communicatively interconnected via a first-type network. The engine and the I/O unit are communicatively interconnected via a second-type network. When the console is connected to the first-type network, the console can remote-control the engine and the I/O unit. However, when the console is connected to the second-type network, the remote-control, by the console, on the engine and the I/O unit is invalidated. When the I/O unit is connected to the second-type network, audio signals can be input/output to/from the I/O unit. When the I/O unit is connected to the first-type network, input/output of audio signal in the I/O unit is invalidated. The present invention permits efficient use of the networks with the console using the first-type network and the I/O unit using the second-type network. | 2015-08-20 |
20150234635 | TRACKING RECITATION OF TEXT - For tracking a recitation of text, a method is disclosed that includes displaying, by use of a processor, a segment of text, receiving an audio signal, and indicating a position in the segment of text determined by a correspondence between the audio signal and the segment of text. | 2015-08-20 |
20150234636 | SYSTEMS AND METHODS USING AUDIO INPUT WITH A MOBILE DEVICE - Systems and methods of using audio input with a mobile device. A method comprises receiving content at a mobile device; annotating, at the mobile device, a selectable item of a user interface associated with the content with an annotation; receiving, at the mobile device, an audio input associated with the annotation; converting, at the mobile device, the audio input into a user interface command; and causing the mobile device to perform a function in response to the user interface command. | 2015-08-20 |
20150234637 | METHOD FOR CREATING BINARY CODE AND ELECTRONIC DEVICE THEREOF - A method for creating a binary code in an electronic device is provided, which includes operations of confirming an image resource for an application, based on a request for creating a binary code for the application; determining an attribute for the image resource; selectively converting the image resource into a compressed texture, based on the attribute; and, if the image resource is converted, creating the binary code for the application, based on the converted image resource. | 2015-08-20 |
20150234638 | APPLYING CODING STANDARDS IN GRAPHICAL PROGRAMMING ENVIRONMENTS - Graphical programming or modeling environments in which a coding standard can be applied to graphical programs or models are disclosed. The present invention provides mechanisms for applying the coding standard to graphical programs/models in the graphical programming/modeling environments. The mechanisms may detect violations of the coding standard in the graphical model and report such violations to the users. The mechanisms may automatically correct the graphical model to remove the violations from the graphical model. The mechanisms may also automatically avoid the violations in the simulation and/or code generation of the graphical model. | 2015-08-20 |
20150234639 | System and Method for Creating a Development and Operational Platform for Mobile Applications - The present invention provides a system and method for constructing a complete definition of a backend requirements model that can be automatically accessed and interpreted, and generated into a mobile consumable API for creation of, and use with, mobile applications. The mobile consumable API can be provided and made available to mobile app developers on a separate, stand-alone platform, and may act as an intermediary between the mobile app and the primary mainframe/enterprise/back end system. The method may include identification and definition of one or more of information providers, integration providers, and system behaviors, and creating a domain model. The domain model may be automatically codified into an API based solution as the app/mainframe interface, and stored on a development and operational platform for use. | 2015-08-20 |
20150234640 | System and Method for Isolating I/O Execution via Compiler and OS Support - Embodiments are provided for isolating Input/Output (I/O) execution by combining compiler and Operating System (OS) techniques. The embodiments include dedicating selected cores, in multicore or many-core processors, as I/O execution cores, and applying compiler-based analysis to classify I/O regions of program source codes so that the OS can schedule such regions onto the designated I/O cores. During the compilation of a program source code, each I/O operation region of the program source code is identified. During the execution of the compiled program source code, each I/O operation region is scheduled for execution on a preselected I/O core. The other regions of the compiled program source code are scheduled for execution on other cores. | 2015-08-20 |
20150234641 | EXECUTION CONTROL METHOD AND INFORMATION PROCESSING APPARATUS - While a first code, in an object code generated from a source code, for a loop included in the source code or a second code in the object code is executed, a feature amount concerning the number of times that a condition of a conditional branch is true is obtained. The loop includes the conditional branch, and the conditional branch is coded in the first code. The second code is a code to perform computation of a branch destination for a case where the condition of the conditional branch is true, only for loop indices that were extracted as the aforementioned case. Then, a processor executes, based on the feature amount, the second code or a third code included in the object code. The third code is a code to write, by using a predicated instruction and into a memory, any computation result of computations of branch destinations. | 2015-08-20 |
20150234642 | User Interfaces of Application Porting Software Platform - User interfaces of a software platform that generates transformed code from source code enable interaction with the codes. In various embodiments, the software platform may store the source code and the transformed code in a data store. The transformed code is a transformation of the source code by at least one business semantic preserving code transform. The at least one business semantic preserving transform causes an execution of the transformed code in a new execution scenario to produce an identical semantic effect as an execution of the source code in an old execution scenario. Subsequently, the software platform may cause a display of a user interface of the application on a display device. The user interface may provide one or more user command items for manipulating at least one of the source code or the transformed code stored in the data store. | 2015-08-20 |
20150234643 | IDENTIFYING SINGLETON CLASSES - A compiler system analyzes source code for an application. The compiler system determines whether a class in the source code uses a singleton pattern even though the class is not defined as singleton class. The compiler system may optionally convert the class to a singleton class. The compiler system may also perform one or more optimizations when generating the application based on the source code. | 2015-08-20 |
20150234644 | AUTOMATIC COLLECTION AND PROVISIONING OF RESOURCES TO MIGRATE APPLICATIONS FROM ONE INFRASTRUCTURE TO ANOTHER INFRASTRUCTURE - Technologies are provided for maintenance of resources in the migration of applications between datacenters through use of a migration module. In some examples, the migration module may collect information such as, by way of example, service provisions from a source datacenter in an idle state and in one or more active states of the applications being migrated. The migration module may test to ensure the collected information is accurate and may provide a mechanism for a customer to re-adjust the collected service provisions. Information may be packaged with the application and moved to the destination datacenter. In the destination datacenter, the migration module may collect the service provisions, build a model, and determine the successful deployability of the application. Once the migration module tests the deployment multiple times for each application state, the destination datacenter may be re-provisioned until the required service provisions are met. | 2015-08-20 |
20150234645 | SUGGESTIONS TO INSTALL AND/OR OPEN A NATIVE APPLICATION - A system and method are provided for providing suggestions to install native applications, the method including accessing a website on an application running on an electronic device, the website comprising metadata, obtaining, from the metadata, a unique identifier of a native application for downloading from a server, transmitting, to a server, a request for identifying information of the native application, the request including the obtained unique identifier, receiving, from the server and in response to the transmitting, the identifying information, displaying within a user interface at least part of the identifying information and a graphical component for installing the native application, receiving user selection of the graphical component, and initiating, in response to receiving the user selection, an inline installation of the native application between the server and the electronic device. | 2015-08-20 |
20150234646 | Method for Installing Security-Relevant Applications in a Security Element of a Terminal - A method is provided for installing a security-relevant portion of an application made available by an application provider in a security element of a terminal. The terminal requests the application from the application provider and receives the application. Subsequently, the received security-relevant portion of the application is transmitted to a trustworthy instance administrating the security element. The trustworthy instance subsequently installs the security-relevant portion of the application in the security element. | 2015-08-20 |
20150234647 | Upgrade Package Generation Method And Device, Dynamic File Differential Upgrade Method And Terminal - An upgrade package generation method and device, and a dynamic file differential upgrade method and terminal, wherein the upgrade package generation method includes: generating a dynamic file upgrade package according to dynamic files which need to be upgraded, wherein the dynamic file upgrade package comprises file name information, path information, and upgrade content information of each dynamic file which needs to be upgraded; and packing the dynamic file upgrade package into an upgrade package. | 2015-08-20 |
20150234648 | FIRMWARE MANAGEMENT SYSTEM, METHOD, AND RECORDING MEDIUM STORING PROGRAM - A firmware management device according to an exemplary aspect of the invention includes: a first control unit that acquires first update information from a distribution server that makes accessible the first update information indicating that a firmware is under release suspension; and a second control unit that stores second update information, in which the first control unit transmits a command for applying or deleting an update program of the firmware on the basis of the first update information and the second update information stored in the first storage unit. | 2015-08-20 |
20150234649 | INFORMATION PROCESSING APPARATUS, SET VALUES UPDATE METHOD FOR THE SAME, AND RECORDING MEDIUM - An information processing apparatus includes: a program receiver that receives a new program externally, the new program for updating of an existing program; a set values receiver that receives new set values externally, the new set values including version information, the version information identifying the version of the program linked to the new set values; a version judgment portion that judges whether or not the version information included in the new set values matches a current program currently installed or the new program to be installed, the new program being received by the program receiver; and an update portion that updates all the set values if the version information matches the current or new program, or only some of the set values if the version information does not match the current or new program. | 2015-08-20 |
20150234650 | METHOD OF MANAGING FIRMWARE AND ELECTRONIC DEVICE THEREOF - A method for managing firmware and an electronic device are provided. The method includes updating the firmware of the internal device using firmware update information of the internal device, which is logically or physically separated from a kernel (OS) image. | 2015-08-20 |
20150234651 | MANAGING DEPLOYMENT OF APPLICATION PATTERN BASED APPLICATIONS ON RUNTIME PLATFORMS - A method for managing application patterns. Service application programming interfaces required for use by an application on a runtime platform are provisioned. The application is based on an application pattern. Deployment information for deploying the application on the runtime platform is generated. The deployment information includes values for properties of the application pattern for configuring the application on the runtime platform. The deployment information is used to deploy the application on the runtime platform. In response, the runtime platform runs the application with the application using the service application programming interfaces previously provisioned for use by the application on the runtime platform. | 2015-08-20 |
20150234652 | TECHNIQUES TO IDENTIFY AND PURGE UNUSED CODE - Techniques to identify and purge unused code are described. In one embodiment, for example, an apparatus may comprise a processor circuit on a device and a storage component configured to store a codebase including one or more portions of programming code. The apparatus may further comprise a sampling component, a profiling component, and a purge component. The sampling component may be operative on the processor circuit to sample the codebase and generate one or more leads identifying portions of programming code from the codebase determined to be unused during a sampling period. The profiling component may be operative on the processor circuit to receive the one or more leads and profile programming code identified therein during a profiling period. The profiling component may be further operative on the processor circuit to identify portions of programming code determined to be unused during the profiling period. The purge component may be operative on the processor circuit to receive identification of the portions of programming code determined to be unused during the profiling period and initiate a purging process thereon. Other embodiments are described and claimed. | 2015-08-20 |
20150234653 | RESOURCE DEPLOYMENT BASED ON CONDITIONS - Architecture that facilitates the package partitioning of application resources based on conditions, and the package applicability based on the conditions. An index is created for a unified lookup of the available resources. At build time of an application, the resources are indexed and determined to be applicable based on the conditions. The condition under which the resource is applicable is then used to automatically partition the resource into an appropriate package. Each resource package then becomes applicable under the conditions in which the resources within it are applicable, and is deployed to the user if the user merits the conditions (e.g., an English user will receive an English package of English strings, but not a French package). Before the application is run, the references to the resources are merged and can be used to do appropriate lookup of what resources are available. | 2015-08-20 |
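The condition-based partitioning described in the abstract above can be sketched as follows. This is a hypothetical illustration, not the patented implementation: resources tagged with the condition under which they apply (here, a language tag) are partitioned into per-condition packages, and only the packages whose conditions the user merits are deployed.

```python
# Illustrative sketch of condition-based resource partitioning: index each
# resource by its applicability condition, partition into per-condition
# packages, then deploy only the packages matching the user's conditions.
from collections import defaultdict

def partition_resources(resources):
    """Group (condition, name, payload) resources into per-condition packages."""
    packages = defaultdict(dict)
    for condition, name, payload in resources:
        packages[condition][name] = payload
    return dict(packages)

def applicable_packages(packages, user_conditions):
    """Return only the packages whose condition the user merits."""
    return {cond: pkg for cond, pkg in packages.items() if cond in user_conditions}

resources = [
    ("en-US", "greeting", "Hello"),
    ("fr-FR", "greeting", "Bonjour"),
    ("en-US", "farewell", "Goodbye"),
]
packages = partition_resources(resources)
deployed = applicable_packages(packages, {"en-US"})
print(deployed)  # an English user receives the English package, not the French one
```

In this sketch the English user receives only the `en-US` package, mirroring the abstract's example of an English user receiving English strings but not a French package.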
20150234654 | INTEGRATED DEVELOPMENT ENVIRONMENT-BASED REPOSITORY SEARCHING IN A NETWORKED COMPUTING ENVIRONMENT - Embodiments of the present invention provide an approach for integrated development environment (IDE)-based repository searching (e.g., for library elements such as classes and/or functions) in a networked computing environment. In a typical embodiment, a first program code file is received from a first integrated development environment (IDE). The first program file may be associated with a set of attributes as stored in an annotation, header, or the like. Regardless, the first program file may be parsed and indexed into a repository based on the set of attributes. A search request may then be received from a second IDE. Based on the search request and the set of attributes, a matching program code file may then be identified as stored in the repository. Once identified, the matching program code file may be transmitted/communicated to the second IDE to fulfill the search request. | 2015-08-20 |
20150234655 | ATOMIC MEMORY OPERATIONS ON AN N-WAY LINKED LIST - Computer-implemented methods for pushing or popping an element on to or off of an N-way linked list in a computer memory may include one or more atomic memory operations on a handle of the N-way linked list. One embodiment for pushing a first element on to an N-way linked list may include setting a next sequential element pointer of the first element to point to an unknown location marker. Another embodiment for popping a first element off of an N-way linked list may include marking a sub-list tail handle with a designation indicating that the particular sub-list is involved in a pop process. In yet another embodiment, a method for popping a first element off of an N-way linked list may include storing in a sub-list tail handle a pointer to a pseudo element. The handle may fit within a single line of cache memory. | 2015-08-20 |
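The push sequence in the abstract above can be sketched single-threaded as follows. This is only an illustration of the ordering the abstract describes (the new element's next pointer is first set to an unknown-location marker, then the handle is swung); the patent performs the handle update with atomic memory operations, and names such as `UNKNOWN` and `SubList` are illustrative, not from the source.

```python
# Single-threaded sketch of the described push: step 1 marks the new
# element's next pointer with an "unknown location" marker, step 2 updates
# the sub-list handle (atomically, in the patent), step 3 resolves the marker.
UNKNOWN = object()  # stand-in for the unknown location marker

class Element:
    def __init__(self, value):
        self.value = value
        self.next = None

class SubList:
    def __init__(self):
        self.head = None  # the handle; per the abstract it may fit in one cache line

    def push(self, element):
        element.next = UNKNOWN   # step 1: next pointer -> unknown location marker
        old_head = self.head
        self.head = element      # step 2: swing the handle to the new element
        element.next = old_head  # step 3: resolve the marker to the real successor

    def pop(self):
        element = self.head
        if element is None:
            return None
        assert element.next is not UNKNOWN, "a push is still in progress"
        self.head = element.next
        return element.value

s = SubList()
s.push(Element(1))
s.push(Element(2))
print(s.pop())  # 2 (LIFO order)
```

The marker lets a concurrent reader detect an in-flight push before the next pointer is valid, which is the role the unknown-location marker plays in the abstract.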
20150234656 | VECTOR PROCESSOR, INFORMATION PROCESSING APPARATUS, AND OVERTAKING CONTROL METHOD - The frequency with which an overtaking process is performed is increased, thereby improving memory access performance. A vector processor | 2015-08-20 |
20150234657 | LATEST PRODUCER TRACKING IN AN OUT-OF-ORDER PROCESSOR, AND APPLICATIONS THEREOF - A processor and system for latest producer tracking. In one embodiment, the processor includes an operand renamer circuit that includes a register rename map, a producer tracking circuit that includes a producer tracking map, and a results buffer allocater circuit that includes a results buffer free list. Control logic modifies in-register status values stored in the register rename map based on producer tracking status values stored in the producer tracking map. The producer tracking status values stored in the producer tracking map are modified based on buffer identification values output by the results buffer allocater circuit. | 2015-08-20 |
20150234658 | LSI AND LSI MANUFACTURING METHOD - An LSI includes an address decoder in which combinations of IP cores and control registers simultaneously accessed according to an operation mode signal are set in advance, so that the plurality of control registers can be accessed with a single system address signal. Therefore, it is unnecessary that the CPU is provided with selection signals whose number is equal to that of the combinations of the control registers. This reduces the coding work for operating the CPU, reducing the work of developing a program for the CPU. | 2015-08-20 |
20150234659 | APPARATUS AND METHOD FOR ASYMMETRIC DUAL PATH PROCESSING - According to embodiments disclosed herein, there is disclosed a computer processor architecture; and in particular a computer processor, a method of operating the same, and a computer program product that makes use of an instruction set for the computer. In one embodiment, the computer processor includes: (1) a decode unit for decoding instruction packets fetched from a memory holding the instruction packets, (2) a control processing channel capable of performing control operations and (3) a data processing channel capable of performing data processing operations, wherein, in use, the decode unit causes instructions of instruction packets comprising a plurality of only control instructions to be executed sequentially on the control processing channel, and wherein, in use, the decode unit causes instructions of instruction packets comprising a plurality of instructions comprising at least one data processing instruction to be executed simultaneously on the data processing channel. | 2015-08-20 |
20150234660 | PROCESSOR-CACHE SYSTEM AND METHOD - A digital system is provided. The digital system includes an execution unit, a level-zero (L0) memory, and an address generation unit. The execution unit is coupled to a data memory containing data to be used in operations of the execution unit. The L0 memory is coupled between the execution unit and the data memory and configured to receive a part of the data in the data memory. The address generation unit is configured to generate address information for addressing the L0 memory. Further, the L0 memory provides at least two operands of a single instruction from the part of the data to the execution unit directly, without loading the at least two operands into one or more registers, using the address information from the address generation unit. | 2015-08-20 |
20150234661 | SEMICONDUCTOR INTEGRATED CIRCUIT DEVICE AND SYSTEM USING THE SAME - A processor system, includes a first central processing unit (CPU) that executes a redundant instruction set; and a second CPU that executes the redundant instruction set, wherein before the second CPU executes a redundant instruction among the redundant instruction set, the first CPU is able to execute n (n is a predetermined integer number) redundant instructions among the redundant instruction set, and wherein when an exception occurs during execution of the redundant instruction set in the first CPU, the first CPU executes an instruction for the exception as a non-redundant instruction. | 2015-08-20 |
20150234662 | APPARATUS FOR MUTUAL-TRANSPOSITION OF SCALAR AND VECTOR DATA SETS AND RELATED METHOD - An apparatus for processing a plurality of data sets is disclosed, wherein one data set of the plurality of data sets includes N components and has a data type of one of a scalar type and a vector type, wherein N is a positive integer number. The apparatus includes a memory module and a data accessing module. The memory module comprises N memory units configured to store the plurality of data sets. The data accessing module is configured to write the data set into the memory module according to a write data index corresponding to the data set and one of a first writing mapping information and a second writing mapping information, wherein the first writing mapping information is employed when the data type is one of the scalar and the vector type and the second writing mapping information is employed when the data type is the other of the scalar and the vector type. | 2015-08-20 |
20150234663 | Instruction and Logic for Run-time Evaluation of Multiple Prefetchers - A processor includes a cache, a prefetcher module to select information according to a prefetcher algorithm, and a prefetcher algorithm selection module. The prefetcher algorithm selection module includes logic to select a candidate prefetcher algorithm, determine and store memory addresses of predicted memory accesses of the candidate prefetcher algorithm when performed by the prefetcher module, determine cache lines accessed during memory operations, and evaluate whether the determined cache lines match the stored memory addresses. The prefetcher algorithm selection module further includes logic to adjust an accuracy ratio of the candidate prefetcher algorithm, compare the accuracy ratio with a threshold accuracy ratio, and determine whether to apply the candidate prefetcher algorithm to the prefetcher module. | 2015-08-20 |
20150234664 | MULTIMEDIA DATA PROCESSING METHOD AND MULTIMEDIA DATA PROCESSING SYSTEM USING THE SAME - A multimedia data processing method is provided which includes providing a conflict detection unit at a load/store pipeline unit; generating, by the conflict detection unit, speculative conflict information, which is used to predictively determine whether an address of a load/store instruction of a current thread causes a conflict miss before a cache access operation is performed by performing a history search for load/store instruction addresses of previous threads without referring to a cache memory; and storing information of the current thread directly in a standby buffer without an execution of the cache access operation in response to the generated speculative conflict information indicating the conflict miss. | 2015-08-20 |
20150234665 | In-Vehicle Information System, Information Terminal, and Application Execution Method - In an in-vehicle information system including a portable information terminal and an in-vehicle device, the information terminal includes: a storage unit that stores applications; a control unit that executes a start-up application; a running application determination unit that determines an application that is executed by the control unit by a predetermined time interval; a comparison unit that compares the start-up application and the running application; and a restriction information transmission unit that transmits to the in-vehicle device restriction information corresponding to contents of action regulation imposed on the running application while a vehicle is in a traveling state based upon a result of the comparison by the comparison unit. | 2015-08-20 |
20150234666 | FAST COMPUTER STARTUP - Fast computer startup is provided by, upon receipt of a shutdown command, recording state information representing a target state. In this target state, the computing device may have closed all user sessions, such that no user state information is included in the target state. However, the operating system may still be executing. In response to a command to startup the computer, this target state may be quickly reestablished from the recorded target state information. Portions of a startup sequence may be performed to complete the startup process, including establishing user state. To protect user expectations despite changes in response to a shutdown command, creation and use of the file holding the recorded state information may be conditional on dynamically determined events. Also, user and programmatic interfaces may provide options to override creation or use of the recorded state information. | 2015-08-20 |
20150234667 | DEFINING CLASSES AS SINGLETON CLASSES OR NON-SINGLETON CLASSES - One or more of the classes is defined using an attribute or keyword that indicates that the one or more classes may be defined as singleton classes or non-singleton classes (a class that may be instantiated more than once). A compiler system converts the class to a singleton class when the compiler system receives a command or request indicating that the class is to be defined as a singleton class. Various optimizations may be performed when one or more of the classes in the source code are defined as singleton classes. The compiler system may not convert the class to a singleton class when the compiler system receives a command or request indicating that the class is to be defined as a non-singleton class. | 2015-08-20 |
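The singleton behavior the abstract above attributes to the compiler system can be mimicked at runtime in Python. This decorator-based sketch is purely illustrative and is not from the patent, which describes a compile-time attribute or keyword rather than a runtime decorator.

```python
# Illustrative sketch: a class marked with this hypothetical decorator is
# instantiated at most once, mimicking the effect of defining the class as
# a singleton class as described in the abstract.
def singleton(cls):
    instances = {}
    def get_instance(*args, **kwargs):
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)  # first call constructs
        return instances[cls]                      # later calls reuse it
    return get_instance

@singleton
class Config:
    def __init__(self):
        self.values = {}

a = Config()
b = Config()
print(a is b)  # True: both names refer to the single instance
```

A compiler that knows a class is a singleton can go further than this runtime trick, e.g. by caching the instance pointer or devirtualizing accesses, which is the kind of optimization the abstract alludes to.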
20150234668 | VIRTUAL MACHINE LOAD BALANCING - Exemplary methods, apparatuses, and systems include virtualization software of a host computer receiving a first packet addressed to a first virtual link layer address. Each of a first plurality of virtual machines on the first host computer is configured to share the first virtual link layer address. The virtualization software of the first host computer maps a flow of packets, including the first packet, to a first virtual machine within the first plurality of virtual machines and forwards the first packet to the first virtual machine. The virtualization software of the first host computer receives a second packet from the first virtual machine in response to the first packet. The second packet includes the first virtual link layer address as a source address for the first virtual machine. | 2015-08-20 |
20150234669 | MEMORY RESOURCE SHARING AMONG MULTIPLE COMPUTE NODES - A method includes running on multiple compute nodes respective memory sharing agents that communicate with one another over a communication network. One or more local Virtual Machines (VMs), which access memory pages, run on a given compute node. Using the memory sharing agents, the memory pages that are accessed by the local VMs are stored on at least two of the compute nodes, and the stored memory pages are served to the local VMs. | 2015-08-20 |
20150234670 | MANAGEMENT APPARATUS AND WORKLOAD DISTRIBUTION MANAGEMENT METHOD - A management apparatus deploys, when loads of one or more first virtual machines deployed on a first system satisfy a first load condition, one or more second virtual machines on a second system, and distributes processing of a business operation across the first and second virtual machines. The management apparatus allows a different second virtual machine to be added to the second system when, after the second virtual machines are deployed, the loads of the first virtual machines satisfy the first load condition and loads of the second virtual machines satisfy a second load condition. The management apparatus restricts the addition of the different second virtual machine to the second system when, after the second virtual machines are deployed, the loads of the first virtual machines satisfy the first load condition but the loads of the second virtual machines do not satisfy the second load condition. | 2015-08-20 |
20150234671 | MANAGEMENT SYSTEM AND MANAGEMENT PROGRAM - A management system is able to reduce a migration load, which is associated with a requested change in a system which receives a virtual machine generation request that designates a template. The system searches for a physical computer having physical resources included in the template, sends a resource allocation request to the searched physical computer in accordance with the received virtual machine generation request and the template, collects a resource usage status of each physical computer, receives an identifier of a generated virtual machine and a change request to change the template. Then, the system calculates a resource type and resource amount, which need to be added by changing the template, based on the collected resource usage status; and determines whether a first physical computer for controlling the generated virtual machine has the additional resource amount of the calculated resource type as an unused resource amount or not. | 2015-08-20 |
20150234672 | INFORMATION PROCESSING DEVICE THAT GENERATES MACHINE DISPOSITION PLAN, AND METHOD FOR GENERATING MACHINE DISPOSITION PLAN - The present invention provides an information processing device that outputs a machine disposition plan for moving virtual machines while suppressing deterioration in user experience performance in a virtual-machine-type thin client system. The information processing device is provided with: a connection time determination unit that, for each first virtual machine, calculates a time slot coverage rate that is the fraction of a connection time slot covered by a high-load time slot, and determines as a second virtual machine a first virtual machine of which the time slot coverage rate exceeds a minimum coverage rate; and a disposition plan generating unit that, on the basis of time-series data of the amount of resource use of each second virtual machine, calculates the number of physical servers necessary to hold the second virtual machines in each of the high-load time slot and a low-load time slot, and associates the number to the virtual machines, outputting the result. | 2015-08-20 |
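The time-slot coverage rate in the abstract above reduces to an interval-overlap calculation, sketched below. The slot values, names, and the single high-load slot are illustrative assumptions; the patent computes the rate per first virtual machine and selects those exceeding a minimum coverage rate as second virtual machines.

```python
# Illustrative sketch of the time-slot coverage rate: the fraction of a VM's
# connection time slot covered by a high-load time slot; VMs whose rate
# exceeds a minimum coverage rate are selected. Slots are (start, end) hours.
def coverage_rate(connection, high_load):
    """Fraction of the connection slot [a, b) covered by the high-load slot [c, d)."""
    a, b = connection
    c, d = high_load
    overlap = max(0.0, min(b, d) - max(a, c))
    return overlap / (b - a)

def select_second_vms(vms, high_load, min_coverage):
    """Pick the VMs whose coverage rate exceeds the minimum coverage rate."""
    return [name for name, slot in vms.items()
            if coverage_rate(slot, high_load) > min_coverage]

vms = {"vm1": (9.0, 17.0), "vm2": (20.0, 23.0)}  # connection slots (hours)
selected = select_second_vms(vms, high_load=(8.0, 18.0), min_coverage=0.9)
print(selected)  # ['vm1']: its connection slot lies inside the high-load slot
```

Here `vm1` is fully covered (rate 1.0) and selected, while `vm2` connects outside the high-load slot (rate 0.0) and is left out.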
20150234673 | INFORMATION PROCESSING SYSTEM - A first device starts to transfer memory data related to a virtual machine running on the first device to a second device connected to the first device via a switch device. When an accumulated amount of transferred memory data exceeds a first threshold, the first device stops packet transmission performed by the virtual machine and transmits a prior shut-down notice to the second device. The first device shuts down the virtual machine when the accumulated amount exceeds a second threshold. The second device transmits, upon receiving the prior shut-down notice, a first control message to the switch device and the first device and causes a virtual network interface to start reception of packets destined for the virtual machine. When the memory transfer is completed, the second device starts up the virtual machine to start the packet transmission and outputs packets held in the virtual network interface to the virtual machine. | 2015-08-20 |
20150234674 | Method, System and Apparatus for Creating Virtual Machine - A method, a system, and an apparatus for creating a virtual machine. The method includes receiving a virtual machine creation request to create a plurality of virtual machines; dividing the plurality of virtual machines into a plurality of virtual machine groups; determining a home physical rack for each virtual machine group, where one virtual machine group corresponds to one home physical rack; and creating each virtual machine group on the home physical rack of each virtual machine group. Because each virtual machine group is created on a home physical rack to which each virtual machine group belongs, each virtual machine group is equivalent to one physical rack. | 2015-08-20 |
20150234675 | SYSTEM AND METHOD FOR PROCESS RUN-TIME PREDICTION - Various embodiments provide process run-time prediction for processes running on server computers. In one embodiment, process run-time of a process is determined by building a database with a history of users, command lines and runtime associated with each command line, and comparing the process with stored records of completed processes in the database. In some embodiments, in response to a determination that the time interval of a process is likely to intersect a planned maintenance period on a server computer, a maintenance notification can be sent to a user of the process and therefore allow the affected process to be migrated to unaffected server computer(s). | 2015-08-20 |
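The prediction scheme in the abstract above can be sketched as follows. This is a hypothetical illustration: a history of (user, command line, runtime) records is kept, a new process's runtime is estimated from matching completed records, and the predicted interval is checked against a planned maintenance window. The averaging rule and all names are assumptions, not from the patent.

```python
# Illustrative sketch of history-based run-time prediction: estimate a
# process's runtime from past (user, command line) records, then test
# whether its predicted interval intersects a planned maintenance period.
from collections import defaultdict

history = defaultdict(list)  # (user, cmdline) -> list of past runtimes (seconds)

def record(user, cmdline, runtime):
    history[(user, cmdline)].append(runtime)

def predict_runtime(user, cmdline):
    runs = history.get((user, cmdline))
    if not runs:
        return None  # no matching completed processes in the database
    return sum(runs) / len(runs)

def intersects_maintenance(start, predicted, maint_start, maint_end):
    """True if [start, start + predicted) overlaps the maintenance window."""
    return start < maint_end and (start + predicted) > maint_start

record("alice", "make world", 3600)
record("alice", "make world", 4200)
eta = predict_runtime("alice", "make world")
print(eta)  # 3900.0
print(intersects_maintenance(0, eta, 3000, 5000))  # True: send the notification
```

When the predicted interval overlaps the window, the abstract's migration module would notify the user so the process can be moved to an unaffected server.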
20150234676 | DATA TRANSFER BUS COMMUNICATION TO RECEIVE DATA BY SENDING REQUEST INSTRUCTION ATTACHED WITH IDENTIFIER INDICATING PROCESSOR AND THREAD CONTEXT IDENTITIES - Systems and methods for managing context switches among threads in a processing system. A processor may perform a context switch between threads using separate context registers. A context switch allows a processor to switch from processing a thread that is waiting for data to one that is ready for additional processing. The processor includes control registers with entries which may indicate that an associated context is waiting for data from an external source. | 2015-08-20 |
20150234677 | DYNAMICALLY ADJUSTING WAIT PERIODS ACCORDING TO SYSTEM PERFORMANCE - A method for dynamically adjusting an actual wait period associated with an operating system call, wherein the operating system call suspends execution of at least one thread in a plurality of threads associated with an operating environment is provided. The method may include determining a utilization factor function associated with the operating environment. The method may also include selecting at least one performance counter within a plurality of performance counters associated with the operating environment. The method may further include computing a utilization factor based on the determined utilization factor function and the selected at least one performance counter. Additionally, the method may include intercepting an operating system call, wherein the operating system call includes a requested wait period parameter. The method may also include updating the actual wait period associated with the intercepted operating system call based on the requested wait period parameter and the computed utilization factor. | 2015-08-20 |
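The wait-period adjustment in the abstract above can be sketched as follows. The counter names, the weighted-average utilization function, and the linear scaling are illustrative assumptions; the patent only specifies that a utilization factor is computed from selected performance counters and combined with the requested wait period of an intercepted call.

```python
# Illustrative sketch: compute a utilization factor from selected performance
# counters, then update an intercepted call's requested wait period using it.
def utilization_factor(counters, weights):
    """Weighted combination of normalized performance counters (each in 0.0-1.0)."""
    total = sum(weights.values())
    return sum(counters[name] * w for name, w in weights.items()) / total

def adjusted_wait(requested_wait, factor, floor=0.001):
    """Scale the requested wait by the utilization factor, never below a floor."""
    return max(floor, requested_wait * factor)

counters = {"cpu_busy": 0.8, "run_queue": 0.5}  # sampled counter values
weights = {"cpu_busy": 3.0, "run_queue": 1.0}   # selection chosen at configuration

factor = utilization_factor(counters, weights)
print(round(factor, 3))              # 0.725
print(adjusted_wait(0.010, factor))  # the intercepted 10 ms wait becomes 7.25 ms
```

Scaling waits down when utilization is low lets suspended threads resume sooner on an idle system, which is the performance-tuning effect the abstract is after.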
20150234678 | CONTROLLING METHOD AND ELECTRONIC DEVICE FOR PROCESSING METHOD - A method and an apparatus for controlling an electronic device are provided. The method includes driving a plurality of Operating Systems (OSs) controlling different mode states of the electronic device. A first OS among the plurality of OSs is set such that the first OS is executed in a first mode state. A second OS among the plurality of OSs is set such that the second OS is executed in a second mode state. While the first mode state is executed by the first OS, a control item executable by the second OS is displayed in the first mode state. In response to receiving an input relating to the control item, a control action corresponding to the control item is performed under the second mode state. | 2015-08-20 |
20150234679 | METHOD TO COMMUNICATE TASK CONTEXT INFORMATION AND DEVICE THEREFOR - Task context information is transferred concurrently from a processor core to an accelerator and to a context memory. The accelerator performs an operation based on the task context information and the context memory saves the task context information. The order of transfer from the processor core is based upon a programmable indicator. During a context restore operation, information is concurrently provided to a data bus from both the accelerator and the processor core. | 2015-08-20 |
20150234680 | TASK CONTROL DEVICE - A task control unit executes a task according to execution request information for the task, which is registered using a function provided by the task control unit. The task control unit separately controls a task (P-Task) whose execution time is set by a programmer and a task (S-Task) whose execution time is set by a system so that the tasks do not interfere with each other. The task control unit executes plural S-Tasks while complying with their execution periods. By controlling tasks in this manner, energy is saved and a network load is reduced. | 2015-08-20 |
20150234681 | INFORMATION PROCESSING SYSTEM AND MANAGEMENT METHOD OF INFORMATION PROCESSING SYSTEM - A system to which the present invention has been applied includes a plurality of information processing apparatuses connected to each other and a management device that divides a first number of pieces of management data needed for management of the plurality of information processing apparatuses into a second number of pieces of management data, the second number being equal to or greater than the first number, and that transmits the second number of pieces of management data obtained by the division respectively to the plurality of information processing apparatuses. | 2015-08-20 |
20150234682 | RESOURCE PROVISIONING SYSTEMS AND METHODS - Example resource provisioning systems and methods are described. In one implementation, an execution platform accesses multiple remote storage devices. The execution platform includes multiple virtual warehouses, each of which includes a cache to store data retrieved from the remote storage devices and a processor that is independent of the remote storage devices. A resource manager is coupled to the execution platform and monitors received data processing requests and resource utilization. The resource manager also determines whether additional virtual warehouses are needed based on the data processing requests and the resource utilization. If additional virtual warehouses are needed, the resource manager provisions a new virtual warehouse. | 2015-08-20 |
20150234683 | Object Optimal Allocation Device, Method and Program - A method, system and computer program product for optimally allocating objects in a virtual machine environment implemented on a NUMA computer system. The method includes: obtaining a node identifier; storing the node identifier in a thread; obtaining an object identifier of a lock-target object from a lock thread; writing a lock node identifier into the lock-target object; traversing an object reference graph where the object reference graph contains an object as a graph node, a reference from the first object to a second object as an edge, and a stack allocated to a thread as the root node; determining whether a move-target object contains the lock node identifier; moving the move-target object to a subarea allocated to a lock node if it contains the lock node identifier, and moving the move-target object to the destination of the current traversal target object if the lock node identifier is not found. | 2015-08-20 |
20150234684 | WORKLOAD MIGRATION BETWEEN VIRTUALIZATION SOFTWARES - A virtual machine (VM) migration from a source virtual machine monitor (VMM) to a destination VMM on a computer system. Each of the VMMs includes virtualization software, and one or more VMs are executed in each of the VMMs. The virtualization software allocates hardware resources in a form of virtual resources for the concurrent execution of one or more VMs and the virtualization software. A portion of a memory of the hardware resources includes hardware memory segments. A first portion of the memory segments is assigned to a source logical partition and a second portion is assigned to a destination logical partition. The source VMM operates in the source logical partition and the destination VMM operates in the destination logical partition. The first portion of the memory segments is mapped into a source VMM memory, and the second portion of the memory segments is mapped into a destination VMM memory. | 2015-08-20 |
20150234685 | FULL EXPLOITATION OF PARALLEL PROCESSORS FOR DATA PROCESSING - Exemplary method, system, and computer program product embodiments for full exploitation of parallel processors for data processing are provided. In one embodiment, by way of example only, a set of parallel processors is partitioned into disjoint subsets according to indices of the set of the parallel processors. The size of each of the disjoint subsets corresponds to the number of processors assigned to the processing of the data chunks at one of the layers. Each of the processors is assigned to a different layer in different data chunks such that every processor is busy and the data chunks are fully processed within a number of time steps equal to the number of the layers. A transition function is devised from the indices of the set of the parallel processors at one time step to the indices of the set of the parallel processors at a following time step. | 2015-08-20 |
20150234686 | EXPLOITING PARALLELISM IN EXPONENTIAL SMOOTHING OF LARGE-SCALE DISCRETE DATASETS - A system and computer program product for large-scale data transformations are provided. Embodiments include a smoothing engine within an R environment to configure at least one master task and at least two worker tasks. A chunk calculator receives a series of data values and divides the series of data values into portions, which are in turn assigned as workloads to the at least two worker tasks. The worker tasks serve to calculate a first state value of a first one of the portions of data values, and calculate a second state value of a second one of the portions of data values. The workloads are selected such that calculating the second state value does not depend on the first state value. The results of the workload calculations are used to calculate a smoothing factor used to predict a trend. | 2015-08-20 |
20150234687 | THREAD MIGRATION ACROSS CORES OF A MULTI-CORE PROCESSOR - Techniques described herein are generally related to thread migration across processing cores of a multi-core processor. Execution of a thread may be migrated from a first processing core to a second processing core. Selective state data required for execution of the thread on the second processing core can be identified and can be dynamically acquired from the first processing core. The acquired state data can be utilized by the thread executed on the second processing core. | 2015-08-20 |
20150234688 | Data Management Systems And Methods - Example data management systems and methods are described. In one implementation, a method identifies multiple files to process based on a received query and identifies multiple execution nodes available to process the multiple files. The method initially creates multiple scansets, each including a portion of the multiple files, and assigns each scanset to one of the execution nodes based on a file assignment model. The multiple scansets are processed by the multiple execution nodes. If the method determines that a particular execution node has finished processing all files in its assigned scanset, an unprocessed file is reassigned from another execution node to the particular execution node. | 2015-08-20 |
20150234689 | INTERFACE COMBINING MULTIPLE SYSTEMS INTO ONE - A vehicle system includes a vehicle, a vehicle control unit (VCU), a plurality of data modules and a data conversion interface. The vehicle includes a chassis that supports the other components of the vehicle, including the VCU. The data modules each provide an output signal with a data type specific to each data module. The data conversion interface couples with the VCU and each data module. The data conversion interface receives the output signal from each data module, converts each output signal into a common data format, and transmits the converted output signals to the VCU. | 2015-08-20 |
20150234690 | IN-VEHICLE APPARATUS AND PROGRAM - An ASL is associated with an APP module having as a communication target an existing APP SW-C( | 2015-08-20 |
20150234691 | METHOD AND A SYSTEM FOR SENDING A FIRST AND SECOND MESSAGE - A system for sending a first message and a second message subsequent to the first message. The system comprises a message sender arranged to send the first message to a processor arranged to process the first message and the second message. The processor is arranged to refuse the second message until after the processor concludes transmitting a response to the first message. The message sender is further arranged to send the second message to the processor before receipt of the response to the first message, timed so that the second message arrives at the processor after the processor concludes sending the response to the first message. | 2015-08-20 |
20150234692 | MEMORY MANAGEMENT METHOD, MEMORY CONTROL CIRCUIT UNIT AND MEMORY STORAGE APPARATUS - A memory management method, a memory control circuit unit using the method, and a memory storage apparatus using the method are provided. The memory management method includes determining whether a use count of the rewritable non-volatile memory module is greater than a use count threshold; based on a result of the determination, sorting each physical erasing unit in a spare area in an ascending manner according to an erasing count of each physical erasing unit in the spare area or according to the number of maximum bit errors of the physical erasing units in the spare area, so as to form a plurality of sorted physical erasing units; and selecting the foremost physical erasing unit from the spare area to write data according to the sorted physical erasing units. By applying the memory management method, the lifespan of the rewritable non-volatile memory module may be effectively prolonged. | 2015-08-20 |
20150234693 | Method and Apparatus for Soft Error Mitigation in Computers - Hardening of an integrated circuit such as a GPU processor against soft errors caused by particle strikes is applied selectively, protecting each device according to the magnitude of the output error that a soft error at that particular device would produce. This approach differs from approaches that protect all devices, all devices likely to produce an output error, or all devices that are vulnerable. | 2015-08-20 |
20150234694 | DETERMINING FAULTY NODES VIA LABEL PROPAGATION WITHIN A WIRELESS SENSOR NETWORK - Faulty nodes within wireless sensor networks can be detected by label propagation training within a wireless network system based on normal and faulty node conditions. The training information is then used to propagate node information to neighboring data vectors, which generates an indication of faulty nodes or an indication of a normal transmission path. | 2015-08-20 |
20150234695 | OPERATIONAL STATUS OF NETWORK NODES - Disclosed are various embodiments for network monitoring. A processor circuit having a processor and a memory is employed. A listing of components of a network is stored in the memory, the listing including a plurality of endpoints and a plurality of nodes. One of the endpoints includes a processor circuit. A monitoring application is stored in the memory and executable by the processor circuit. The monitoring application is configured to maintain in the memory an indication of an operational status of each of the nodes derived from a plurality of status requests transmitted between respective pairs of the endpoints. | 2015-08-20 |
20150234696 | LOAD-CONTROL BACKUP SIGNAL GENERATION CIRCUIT - In a case in which a malfunction occurs in a control processor which operates according to a predetermined program, a load-control backup signal generation circuit supplies a backup control signal to a switch of a load connected to an output of the control processor. The load-control backup signal generation circuit includes: a watchdog input terminal to which a watchdog signal periodically output from the control processor is input; a pulse count unit which counts a clock pulse generated with a constant period and which controls a count state of the clock pulse according to a signal input to the watchdog input terminal; and a signal selection unit which selects, from a plurality of options, a predetermined condition for causing a backup signal output unit to generate the backup control signal, based on a count output signal of a plurality of bits output from the pulse count unit. | 2015-08-20 |
20150234697 | LOAD-CONTROL BACKUP SIGNAL GENERATION CIRCUIT - In a case in which a malfunction occurs in a control processor which operates according to a predetermined program, a load-control backup signal generation circuit supplies a backup control signal to a switch of a load connected to an output of the control processor. The load-control backup signal generation circuit includes: a watchdog input terminal to which a watchdog signal periodically output from the control processor is input; a pulse count unit which counts a clock pulse generated with a constant period and which controls a count state of the clock pulse according to a signal input to the watchdog input terminal; and a backup signal output unit which generates the backup control signal when a count output of the pulse count unit satisfies a predetermined condition. | 2015-08-20 |
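The two load-control backup entries above (20150234696 and 20150234697) both describe the same core mechanism: a free-running clock increments a pulse counter, the periodic watchdog signal from the control processor controls (resets) that count, and the backup control signal is generated once the count satisfies a predetermined condition. A minimal software sketch of that idea, purely illustrative and not taken from the patents (the class and method names here are hypothetical), might look like this:

```python
class WatchdogBackup:
    """Illustrative pulse-count watchdog: a constant-period clock increments
    a counter, a periodic watchdog 'kick' from the control processor resets
    it, and the backup control signal asserts once the count reaches a
    predetermined threshold (the processor is presumed to have malfunctioned)."""

    def __init__(self, timeout_pulses: int):
        self.timeout_pulses = timeout_pulses  # the predetermined condition
        self.count = 0                        # pulse count unit state
        self.backup_asserted = False          # backup signal output

    def kick(self) -> None:
        """Watchdog signal received at the watchdog input terminal: reset."""
        self.count = 0

    def clock_pulse(self) -> None:
        """One clock pulse of constant period; assert backup on timeout."""
        self.count += 1
        if self.count >= self.timeout_pulses:
            self.backup_asserted = True
```

As long as the control processor kicks the watchdog within `timeout_pulses` clock pulses, the counter never reaches the threshold and the backup signal stays inactive; if the kicks stop, the count runs up and the backup control signal takes over the load switch. Entry 20150234696 adds a signal selection unit that lets this threshold condition be chosen from several options based on the counter's output bits.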