Patent application number | Description | Published |
20090116411 | Mesh Tree Formation in Wireless Networks - A mesh tree formation system. In particular implementations, a method includes responsive to a selection of a channel potentially bearing a higher priority use, entering a silent state and initiating a channel scan of the selected channel for a period of time. The method also includes, responsive to receipt of an enabling signal, entering a limited transmission state that enables transmission of wireless frames on the selected channel. The method also includes, responsive to termination of the period of time of the channel scan wherein no higher priority use is detected, entering a full transmission state comprising transmission of enabling signals corresponding to the selected channel. | 05-07-2009 |
20090201851 | Coordinated Channel Change in Mesh Networks - A coordinated channel change system. In particular implementations, a method includes receiving a prepare-to-change message, wherein the prepare-to-change message indicates instructions to prepare to change channels and includes a designated channel, and forwarding the prepare-to-change message to one or more child nodes. The method also includes receiving a ready-to-change message from the one or more child nodes, and transmitting a change-to-channel message to the one or more child nodes, wherein the change-to-channel message indicates instructions to switch to the designated channel. The method also includes receiving an acknowledgement message from the one or more child nodes, and changing to the designated channel. | 08-13-2009 |
20090252064 | SEAMLESS TREE CREATION AND MOVEMENT - In an example embodiment, a beacon is sent on all available interfaces of a device comprising data indicating the operating parameters of all interfaces of the device. A beacon containing data about the configuration of a first interface and a second interface is sent on both the first interface and the second interface. The beacon may suitably comprise data indicating the protocol, channel, and spanning trees for the interface. If communication on the primary interface becomes unavailable, the data in the beacons can be used to facilitate switching communication to the secondary interface. | 10-08-2009 |
20090252127 | BUILDING WIRELESS ROUTING STRUCTURES USING OUT OF BAND SIGNALING - In an example embodiment, an access point (AP) uses out-of-band signaling on a single non-DFS (Dynamic Frequency Selection) frequency band radio in an N-radio system to synchronize information with neighboring APs and to learn about their radio interfaces. This enables the AP to be able to acquire information about neighbor APs on different frequency bands and to build and maintain mesh routing structures while minimizing backhaul down-time. | 10-08-2009 |
20150146543 | Uplink-Based Wireless Radio Resource Management - Presented herein are techniques for using uplink transmissions from devices (e.g., wireless tags, clients, etc.) to determine a path loss between neighboring access points. In one example, a wireless controller obtains receive signal strength information of uplink transmissions received at neighboring access points in a wireless network. The wireless controller determines an effective path loss between the neighboring access points based on the receive signal strength information for the uplink transmissions received at the neighboring access points. The wireless controller also performs radio resource management operations in the wireless network using the effective path loss determined based on the uplink transmissions received at the neighboring access points. | 05-28-2015 |
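The path-loss determination in application 20150146543 reduces to a simple relation: the effective path loss between a sender and a neighboring AP is the transmit power minus the signal strength the AP actually receives. A minimal sketch of that calculation, with illustrative names and values not taken from the application:

```python
def effective_path_loss_db(tx_power_dbm, rssi_dbm_samples):
    """Estimate the effective path loss between a transmitting device and a
    neighboring AP as the device's transmit power minus the mean RSSI of its
    uplink frames heard at that AP (all names here are illustrative)."""
    avg_rssi = sum(rssi_dbm_samples) / len(rssi_dbm_samples)
    return tx_power_dbm - avg_rssi

# Uplink frames sent at 15 dBm, heard at a neighboring AP around -70 dBm:
loss_db = effective_path_loss_db(15, [-71, -69, -70])  # → 85.0
```

A controller repeating this over many device/AP pairs obtains the pairwise loss matrix it can feed into radio resource management decisions.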
20090296683 | Transmitting a protocol data unit using descriptors - A method for transmitting a protocol data unit (PDU) includes receiving one or more Ethernet packets and reading a header associated with each of the one or more Ethernet packets. The method further includes generating a transmit descriptor that includes a plurality of fields containing information for preparing a protocol data unit (PDU), and preparing a PDU according to the transmit descriptor. The method also includes transmitting a burst, wherein the burst includes one or more PDUs. | 12-03-2009 |
20090298508 | Receiving and Processing Protocol Data Units - A method for processing one or more bursts including receiving at least a portion of a first burst comprising one or more protocol data units. The method includes receiving a sequence number for the first burst. The method includes writing the sequence number and the first burst to a physical-layer queue, such that the first burst is concatenated to the sequence number in the physical-layer queue. The sequence number may identify the first burst from one or more second bursts written to the physical-layer queue preceding or following the first burst. | 12-03-2009 |
20090300328 | Aligning Protocol Data Units - An apparatus for receiving one or more protocol data units (PDUs) from a word aligned queue including a media access control (MAC) physical-layer (PHY) coprocessor (MPC) logically residing between a physical-layer controller and a media access controller (MAC) processor. The MPC is configured to access a reception physical-layer queue storing a burst, such that the reception physical-layer queue includes a plurality of word lines. The burst includes one or more PDUs that each occupy one or more word lines of the reception physical-layer queue, such that a particular word line stores a portion of a first PDU and a portion of a second PDU. The MPC is also configured to receive from the reception physical-layer queue the first PDU, including the portion of the first PDU stored in that word line. | 12-03-2009 |
20090323584 | Method and Apparatus for Parallel Processing Protocol Data Units - An apparatus for processing a protocol data unit (PDU) includes a transmit (TX) module, a physical layer controller (PHY), and a reception (RX) module. The TX module is configured to receive a transmit descriptor that comprises a plurality of fields, wherein the fields contain information for preparing a first PDU. The PHY is configured to receive at least a portion of a second PDU into a reception (RX) physical-layer queue, the second PDU having a header. The RX module is configured to receive a receive descriptor, wherein the receive descriptor includes a plurality of fields having information for processing the second PDU. The RX module is further configured to process the second PDU according to the receive descriptor in parallel with the TX module processing the first PDU based on the transmit descriptor. | 12-31-2009 |
20090323585 | Concurrent Processing of Multiple Bursts - An apparatus includes a physical layer controller (PHY) and a media access control coprocessor (MPC). The PHY includes a plurality of physical-layer queues, wherein each of the plurality of physical-layer queues is associated with one of a plurality of bursts, wherein each of the plurality of bursts includes one or more protocol data units (PDUs). The MPC is configured to receive a first PDU associated with a first burst of the plurality of bursts and write the first PDU of the first burst to a first physical-layer queue of the plurality of physical-layer queues. The MPC is further configured to receive a second PDU and determine that the second PDU is associated with a second burst of the plurality of bursts. The MPC is also configured to write the second PDU to a second physical-layer queue, wherein the second physical-layer queue is associated with the second burst. | 12-31-2009 |
20090327716 | Verifying a Cipher-Based Message Authentication Code - A system for verifying a cipher-based message authentication code (CMAC), including a reception (RX) module logically residing between a physical layer controller (PHY) and a media access controller (MAC) processor, such that the RX module is configured to receive one or more portions of the CMAC with one or more bursts, process the one or more bursts, and write the one or more portions of the CMAC to one or more memory locations in a memory. The system also includes a transmission (TX) module logically residing between the PHY and the MAC processor, such that the TX module is configured to verify the CMAC concurrently as the RX module processes the one or more bursts. | 12-31-2009 |
20120275471 | Aligning Protocol Data Units - An apparatus for receiving one or more protocol data units (PDUs) from a word aligned queue including a media access control (MAC) physical-layer (PHY) coprocessor (MPC) logically residing between a physical-layer controller and a media access controller (MAC) processor. The MPC is configured to access a reception physical-layer queue storing a burst, such that the reception physical-layer queue includes a plurality of word lines. The burst includes one or more PDUs that each occupy one or more word lines of the reception physical-layer queue, such that a particular word line stores a portion of a first PDU and a portion of a second PDU. The MPC is also configured to receive from the reception physical-layer queue the first PDU, including the portion of the first PDU stored in that word line. | 11-01-2012 |
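The transmit-descriptor idea running through this group (e.g. 20090296683) is that a compact record of fields tells downstream hardware how to frame a PDU. A toy sketch under assumed field names; the real descriptors and framing are not specified in these abstracts:

```python
from dataclasses import dataclass

@dataclass
class TxDescriptor:
    """Hypothetical transmit descriptor; the field names are illustrative,
    not taken from the applications above."""
    dest_flow_id: int
    payload_len: int
    needs_crc: bool

def prepare_pdu(desc: TxDescriptor, payload: bytes) -> bytes:
    """Build a PDU as directed by the descriptor: a tiny two-byte header,
    a length-limited payload, and an optional toy 1-byte checksum. A real
    MAC would emit far richer framing and a true CRC."""
    header = bytes([desc.dest_flow_id, desc.payload_len])
    pdu = header + payload[:desc.payload_len]
    if desc.needs_crc:
        pdu += (sum(pdu) & 0xFF).to_bytes(1, "big")
    return pdu

pdu = prepare_pdu(TxDescriptor(dest_flow_id=7, payload_len=4, needs_crc=True),
                  b"data")
```

Because the descriptor fully determines how each PDU is prepared, a receive module can process inbound PDUs from its own receive descriptors in parallel, as 20090323584 describes.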
20130235861 | WLAN SYSTEM SCANNING AND SELECTION - Techniques for performing WLAN system scanning and selection are described. A terminal performs multiple iterations of scan to detect for WLAN systems. A scan list containing at least one WLAN system to detect for is initially determined. For each scan iteration, a scan type may be selected from among the supported scan types. The selected scan type may indicate passive scan or active scan, frequency channels to scan, etc. A scan may be performed based on the selected scan type. Signal strength measurements are obtained for access points received during the scan and used to identify detected access points. After all scan iterations are completed, candidate access points are identified based on the scan results, e.g., based on the signal strength measurements for the detected access points and a detection threshold. The best candidate access point may be selected for association by the terminal. | 09-12-2013 |
20150049752 | COLLISION AVOIDANCE FOR TRAFFIC IN A WIRELESS NETWORK - Techniques for avoiding collision of traffic in a wireless network are described. A station detects for synchronization of its traffic with traffic of other stations. The station may detect for synchronization based on, e.g., percentage of first transmission failures, counters indicative of statistics of transmitted frames, and/or other information. The station may confirm synchronization of its traffic, e.g., by monitoring for traffic from another station during a service period for the station. The station adjusts transmission of its traffic when synchronization is detected to avoid collision with the traffic of the other stations. The station may delay transmission of its traffic by a predetermined amount of time, by a pseudo-random amount, or until after the other stations finish their transmissions. | 02-19-2015 |
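The candidate-selection step in 20130235861 (filter detected access points by a detection threshold, then pick the best) can be sketched in a few lines; the BSSID names and threshold value below are illustrative assumptions:

```python
def select_candidates(scan_results, detection_threshold_dbm=-85):
    """Filter access points detected during scanning by signal strength,
    then rank the survivors best-first. `scan_results` maps an AP
    identifier to its measured RSSI in dBm (names are illustrative)."""
    detected = {bssid: rssi for bssid, rssi in scan_results.items()
                if rssi >= detection_threshold_dbm}
    return sorted(detected, key=detected.get, reverse=True)

# ap-2 falls below the -85 dBm detection threshold and is dropped:
candidates = select_candidates({"ap-1": -60, "ap-2": -90, "ap-3": -72})
```

The terminal would then attempt association with the head of the returned list, falling back to the next candidate on failure.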
20150277995 | USING LOOPBACK INTERFACES OF MULTIPLE TCP/IP STACKS FOR COMMUNICATION BETWEEN PROCESSES - Multiple TCP/IP stack processors on a host. The multiple TCP/IP stack processors are provided independently of TCP/IP stack processors implemented by virtual machines on the host. The TCP/IP stack processors provide multiple different default gateway addresses for use with multiple processes. The default gateway addresses allow a service to communicate across an L3 network. Processes outside of virtual machines that utilize the TCP/IP stack processor on a first host can benefit from using their own gateway, and communicate with their peer process on a second host, regardless of whether the second host is located within the same subnet or a different subnet. The multiple TCP/IP stack processors can use separately allocated resources. Separate TCP/IP stack processors can be provided for each of multiple tenants on the host. Separate loopback interfaces of multiple TCP/IP stack processors can be used to create separate containment for separate sets of processes on a host. | 10-01-2015 |
20150281047 | USING DIFFERENT TCP/IP STACKS FOR DIFFERENT HYPERVISOR SERVICES - Multiple TCP/IP stack processors on a host. The multiple TCP/IP stack processors are provided independently of TCP/IP stack processors implemented by virtual machines on the host. The TCP/IP stack processors provide multiple different default gateway addresses for use with multiple processes. The default gateway addresses allow a service to communicate across an L3 network. Processes outside of virtual machines that utilize the TCP/IP stack processor on a first host can benefit from using their own gateway, and communicate with their peer process on a second host, regardless of whether the second host is located within the same subnet or a different subnet. The multiple TCP/IP stack processors can use separately allocated resources. Separate TCP/IP stack processors can be provided for each of multiple tenants on the host. Separate loopback interfaces of multiple TCP/IP stack processors can be used to create separate containment for separate sets of processes on a host. | 10-01-2015 |
20150281112 | USING DIFFERENT TCP/IP STACKS WITH SEPARATELY ALLOCATED RESOURCES - Multiple TCP/IP stack processors on a host. The multiple TCP/IP stack processors are provided independently of TCP/IP stack processors implemented by virtual machines on the host. The TCP/IP stack processors provide multiple different default gateway addresses for use with multiple processes. The default gateway addresses allow a service to communicate across an L3 network. Processes outside of virtual machines that utilize the TCP/IP stack processor on a first host can benefit from using their own gateway, and communicate with their peer process on a second host, regardless of whether the second host is located within the same subnet or a different subnet. The multiple TCP/IP stack processors can use separately allocated resources. Separate TCP/IP stack processors can be provided for each of multiple tenants on the host. Separate loopback interfaces of multiple TCP/IP stack processors can be used to create separate containment for separate sets of processes on a host. | 10-01-2015 |
20150281407 | USING DIFFERENT TCP/IP STACKS FOR DIFFERENT TENANTS ON A MULTI-TENANT HOST - Multiple TCP/IP stack processors on a host. The multiple TCP/IP stack processors are provided independently of TCP/IP stack processors implemented by virtual machines on the host. The TCP/IP stack processors provide multiple different default gateway addresses for use with multiple processes. The default gateway addresses allow a service to communicate across an L3 network. Processes outside of virtual machines that utilize the TCP/IP stack processor on a first host can benefit from using their own gateway, and communicate with their peer process on a second host, regardless of whether the second host is located within the same subnet or a different subnet. The multiple TCP/IP stack processors can use separately allocated resources. Separate TCP/IP stack processors can be provided for each of multiple tenants on the host. Separate loopback interfaces of multiple TCP/IP stack processors can be used to create separate containment for separate sets of processes on a host. | 10-01-2015 |
20150381505 | Framework for Early Congestion Notification and Recovery in a Virtualized Environment - The congestion notification system of some embodiments sends congestion notification messages from lower layer (e.g., closer to a network) components to higher layer (e.g., closer to a packet sender) components. When the higher layer components receive the congestion notification messages, the higher layer components reduce the sending rate of packets (in some cases the rate is reduced to zero) to allow the lower layer components to lower congestion (i.e., create more space in their queues by sending more data packets along the series of components). In some embodiments, the higher layer components resume full speed sending of packets after a threshold time elapses without further notification of congestion. In other embodiments, the higher layer components resume full speed sending of packets after receiving a message indicating reduced congestion in the lower layers. | 12-31-2015 |
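One recovery variant in 20150381505 (resume full-speed sending after a threshold time elapses without further congestion notification) can be modeled as a tiny timer check; the class and parameter names below are illustrative, not from the application:

```python
import time

class BackoffSender:
    """Toy model of a higher-layer sender in the congestion-notification
    framework above: it pauses on a congestion notification and resumes
    full speed once `resume_after` seconds pass with no new notification."""
    def __init__(self, resume_after=0.05):
        self.resume_after = resume_after
        self.last_congestion = None  # monotonic timestamp, or None

    def notify_congestion(self):
        """Called when a lower-layer component reports a filling queue."""
        self.last_congestion = time.monotonic()

    def can_send(self):
        """True when no notification was seen, or the quiet period elapsed."""
        if self.last_congestion is None:
            return True
        return time.monotonic() - self.last_congestion >= self.resume_after

sender = BackoffSender(resume_after=0.01)
sender.notify_congestion()
paused = not sender.can_send()   # immediately after notification: paused
time.sleep(0.02)
resumed = sender.can_send()      # quiet period elapsed: full speed again
```

The alternative recovery path in the abstract, resuming on an explicit reduced-congestion message, would simply reset `last_congestion` to `None`.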
20110026558 | LIGHT EMITTING SEMICONDUCTOR DEVICE - A fiber coupled semiconductor device and a method of manufacturing of such a device are disclosed. The method provides an improved stability of optical coupling during assembly of the device, whereby higher optical power levels and higher overall efficiency of the fiber coupled device can be achieved. The improvement is achieved by attaching the optical fiber to a vertical mounting surface of a fiber mount. The platform holding the semiconductor chip and the optical fiber can be mounted onto a spacer mounted on a base. The spacer has an area smaller than the area of the platform, for mechanical decoupling of thermally induced deformation of the base from a deformation of the platform of the semiconductor device. Optionally, attaching the fiber mount to a submount of the semiconductor chip further improves thermal stability of the packaged device. | 02-03-2011 |
20110026877 | SEMICONDUCTOR DEVICE ASSEMBLY - A fiber coupled semiconductor device having an improved optical stability with respect to temperature variation is disclosed. The stability improvement is achieved by placing the platform holding the semiconductor chip and the optical fiber onto a spacer mounted on a base. The spacer has an area smaller than the area of the platform, for mechanical decoupling of thermally induced deformation of the base from a deformation of the platform of the semiconductor device. Attaching the optical fiber to a vertical mounting surface of a fiber mount, and additionally attaching the fiber mount to a submount of the semiconductor chip further improves thermal stability of the packaged device. | 02-03-2011 |
20110158594 | OPTICAL MODULE WITH FIBER FEEDTHROUGH - A molded ceramic or glass ferrule has at least one longitudinal passage that enables an optical fiber feedthrough; the ferrule is sealed into a metal housing with glass solder. The metal material in the housing has a slightly higher coefficient of thermal expansion (CTE) than the ferrule material and the sealing glass, so that a hermetic seal is maintained by a compression stress applied to the ferrule and sealing glass by the housing at operating conditions. When the housing has to be fabricated from a low CTE material, e.g. metal or ceramic, a metal sleeve and stress relief bracket are used to apply the compression stress. | 06-30-2011 |
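The compression-seal condition in 20110158594 rests on free thermal expansion, ΔL = α·L·ΔT: a housing with a higher CTE contracts more than the ferrule on cooling from the sealing temperature, squeezing the ferrule and glass. A rough numeric sketch with assumed, merely representative CTE values:

```python
def thermal_expansion_um(cte_ppm_per_c, length_mm, delta_t_c):
    """Free linear expansion ΔL = α·L·ΔT, returned in micrometres."""
    return cte_ppm_per_c * 1e-6 * (length_mm * 1000) * delta_t_c

# Cooling 400 °C from the glass-sealing temperature over a 10 mm joint;
# CTE values are illustrative (a stainless-like metal vs. a ceramic):
housing_um = thermal_expansion_um(17.0, 10.0, -400)
ferrule_um = thermal_expansion_um(7.0, 10.0, -400)
mismatch_um = ferrule_um - housing_um  # positive → housing compresses ferrule
```

The positive mismatch is what keeps the glass solder joint in compression rather than tension at operating conditions, preserving hermeticity.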
20120173497 | DEFENSE-IN-DEPTH SECURITY FOR BYTECODE EXECUTABLES - Defense-in-Depth security defines a set of graduated security tasks, each of which performs a task that must complete before another task can complete. Only when these tasks complete successfully and in the order prescribed by Defense-in-Depth security criteria is a final process allowed to execute. Through such Defense-in-Depth security measures, vulnerable software, such as bytecode, can be verified as unaltered and executed in a secure environment that prohibits unsecured access to the underlying code. | 07-05-2012 |
20120173691 | ABSTRACT REPRESENTATION AND PROVISIONING OF NETWORK SERVICES - A network management device connects to a device on the network, receives a trigger for an operation command, supplies to the device a command line interface command for the operation command, wherein a randomly generated string is included at the end of the command line interface command. The network management device receives the output of the operation command from the device, detects the end of the operation command output and parses the output using an XML-based parser. XML based configuration files are used for configuration of different network devices. XML based report files are used to generate different network reports. | 07-05-2012 |
20120174086 | Extensible Patch Management - Extensible patch management provides mechanisms by which data, database and binaries for one or more components of an application may be updated. The patch framework extends patch related functionality at different devices as needed to perform a software patch in a manner that allows such functionality to be retained at the device. Additionally, the patch framework is platform independent and thus allows the same patch related software to be distributed and executed across different platforms. | 07-05-2012 |
20150081900 | Abstract Representation and Provisioning of Network Services - A network management device connects to a device on the network, receives a trigger for an operation command, supplies to the device a command line interface command for the operation command, wherein a randomly generated string is included at the end of the command line interface command. The network management device receives the output of the operation command from the device, detects the end of the operation command output and parses the output using an XML-based parser. XML based configuration files are used for configuration of different network devices. XML based report files are used to generate different network reports. | 03-19-2015 |
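The random-string trick in 20120173691 and 20150081900 gives the management device an unambiguous end-of-output delimiter: append a freshly generated marker to the CLI command, then read until the marker comes back. A minimal sketch, with the device transport simulated by an in-memory stream:

```python
import io
import secrets

def parse_until_marker(stream, marker: str) -> str:
    """Collect device output lines until the randomly generated marker
    appears, which signals the end of this command's output."""
    output = []
    for line in stream:
        if marker in line:
            break
        output.append(line)
    return "".join(output)

# Simulated reply: the device echoes its output, then the marker we
# appended to the command line (addresses are illustrative):
marker = secrets.token_hex(8)
device_reply = io.StringIO(
    "interface Gi0/1\n ip address 10.0.0.1 255.255.255.0\n" + marker + "\n")
text = parse_until_marker(device_reply, marker)
```

Because the marker is random per command, it cannot collide with legitimate device output, so the parser never terminates early; the collected text would then be handed to the XML-based parser the abstract describes.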
20080301113 | SYSTEM AND METHOD FOR PROVIDING VECTOR TERMS RELATED TO A SEARCH QUERY - The present invention relates to providing vector terms for use in formulating search requests in response to a user query. The method according to one embodiment comprises receiving a search query from a client and identifying links to one or more content items corresponding to the search query. One or more term vectors are then generated corresponding to the content items and one or more vector terms are selected from the term vectors. The links to the one or more content items and selected vector terms are combined to form a final result page. | 12-04-2008 |
20090089267 | SYSTEM AND METHOD FOR EDITING HISTORY IN A SEARCH RESULTS PAGE - The present invention is directed towards systems, methods and computer program products for controlling a user history module. According to one embodiment, a method for controlling a user history module comprises providing a history module to a user, the history module comprising a plurality of search queries and a plurality of selected search results, and monitoring user interaction with the history module. A predetermined operation is performed on the history module in response to a user interaction. | 04-02-2009 |
20090089311 | SYSTEM AND METHOD FOR INCLUSION OF HISTORY IN A SEARCH RESULTS PAGE - The present invention is directed towards systems, methods and computer program products for maintaining a record of a user search history. According to one embodiment, a method for maintaining a record of a user search history comprises receiving a plurality of user search queries, storing the user search queries and providing a plurality of search results related to the user search queries. User interaction with the plurality of search results is monitored and the interactions with at least one of the search results are stored. A history module comprising the user search queries and the user interactions is displayed to the user on a display device. | 04-02-2009 |
20090089312 | SYSTEM AND METHOD FOR INCLUSION OF INTERACTIVE ELEMENTS ON A SEARCH RESULTS PAGE - The present invention is directed to systems, methods and computer program products for generating a graphical module for the display of query-specific content. The method according to one embodiment comprises receiving a query, determining a category identifier for the query and retrieving a category template corresponding to the category identifier for the query. At least one template query is performed, the template query corresponding to a request for data specified by the category template, and a template module is generated that comprises the data retrieved by the template query. The template module is combined with a search results page responsive to the query for display to a user. | 04-02-2009 |
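The term-vector flow in 20080301113 (build term vectors over the matching content items, then pick vector terms to offer alongside the results) can be sketched with plain term frequencies; a production system would weight terms (e.g. TF-IDF) and filter stopwords, and all names below are illustrative:

```python
from collections import Counter

def term_vector(text: str) -> Counter:
    """Crude term-frequency vector over lowercased whitespace tokens."""
    return Counter(text.lower().split())

def select_vector_terms(content_items, query_terms, k=3):
    """Merge the term vectors of the result content items and return the
    k most frequent terms not already in the query, as candidate
    refinement terms for the final result page."""
    merged = Counter()
    for item in content_items:
        merged += term_vector(item)
    for term in set(query_terms):
        merged.pop(term, None)
    return [term for term, _ in merged.most_common(k)]

terms = select_vector_terms(
    ["mesh networks use wireless links", "wireless mesh routing"],
    ["mesh"], k=2)
```

The selected terms are then combined with the result links to form the final result page, giving the user one-click query refinements.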