T2080 Datasheet - Freescale Semiconductor
Chip features
• Provides system software with an efficient means to move data and perform cache operations between two
disjoint address spaces
• Eliminates the need to copy data from a source context into a kernel context, switch to the destination address
space, and then copy the data again, or alternatively to map the user space into the kernel address space
The arrangement of cores into clusters with shared L2 caches is part of a major re-architecture of the QorIQ cache hierarchy.
Details of the banked L2 are provided below.
• 2 MB cache with ECC protection (data, tag, & status)
• 64-byte cache line size
• 16-way set associative
• Ways in each bank can be configured in one of several modes: I-only, D-only, or unified
• Flexible way partitioning per vCPU
• Supports direct stashing of datapath architecture data into L2
4.5 Inverted cache hierarchy
From the perspective of software running on a core vCPU, the SoC incorporates a 2-level cache hierarchy. These levels are
as follows:
• Level 1: Individual core 32 KB Instruction and Data caches
• Level 2: Locally banked 2 MB cache (configurably shared by other vCPUs in the cluster)
Therefore, the CPC is not intended to act as backing store for the L2s. This allows the CPCs to be dedicated to the non-CPU
masters in the SoC, storing DPAA data structures and IO data that the CPUs and accelerators will most likely need.
Although the SoC supports allocation policies that would result in CPU instructions and data being held in the CPC (the
CPC acting as a vCPU L3), this is not the default. Because the CPC serves fewer masters, it serves those masters better,
reducing the DDR bandwidth consumed by the DPAA and improving average latency.
4.6 CoreNet fabric and address map
As Freescale's next generation front-side interconnect standard for multicore products, the CoreNet fabric provides the
following:
• A highly concurrent, fully cache coherent, multi-ported fabric
• Point-to-point connectivity with a flexible protocol architecture, allowing pipelined interconnection between CPUs,
platform caches, memory controllers, I/O, and accelerators at up to 800 MHz
• Designed to overcome bottlenecks associated with shared bus architectures, particularly address issue and data
bandwidth limitations. The chip's multiple, parallel address paths allow for high address bandwidth, a key
performance indicator for large coherent multicore processors.
• Eliminates address retries, triggered by CPUs being unable to snoop within the narrow snooping window of a shared
bus. This results in the chip having lower average memory latency.
This chip's 40-bit physical address map consists of local space and external address space. For the local address map, 32
local access windows (LAWs) define mapping within the local 40-bit (1 TB) address space. Inbound and outbound
translation windows can map the chip into a larger system address space such as the RapidIO or PCIe 64-bit address
environment. This functionality is included in the address translation and mapping units (ATMUs).
T2080 Product Brief, Rev 0, 04/2014
Freescale Semiconductor, Inc.
