mhi: add snapshot for MHI driver stack

Add snapshot for MHI driver stack from msm-4.19 commit
6cc10bff8d4a ("Bluetooth: Re-initialize regulator to NULL on error").
Additional changes are made to fix checkpatch errors.

Change-Id: I4dbe727e25d01967dd9cbe60125af47d3bab3f20
Signed-off-by: Hemant Kumar <hemantk@codeaurora.org>

Documentation/mhi.txt

@@ -0,0 +1,276 @@
Overview of Linux kernel MHI support
====================================
Modem-Host Interface (MHI)
==========================
MHI is used by the host to control and communicate with a modem over a high
speed peripheral bus. Even though MHI can easily be adapted to any peripheral
bus, it is primarily used with PCIe based devices. The host has one or more
PCIe root ports connected to the modem device. The host has limited access to
the device memory space, including register configuration and control of the
device operation. Data transfers are invoked from the device.
All data structures used by MHI are in the host system memory. The device
accesses those data structures over the PCIe interface. MHI data structures
and data buffers in the host system memory regions are mapped for the device.
Memory spaces
-------------
PCIe Configurations : Used for enumeration and resource management, such as
interrupts and base addresses. This is handled by the MHI control driver.
MMIO
----
MHI MMIO : Memory mapped IO consists of a set of registers in the device
hardware, which are mapped to the host memory space through a PCIe base
address register (BAR).
MHI control registers : Access to MHI configuration registers
(struct mhi_controller.regs).
MHI BHI registers : Boot host interface registers (struct mhi_controller.bhi)
used for firmware download before MHI initialization.
Channel db array : Doorbell registers (struct mhi_chan.tre_ring.db_addr) used
by the host to notify the device that there is new work to do.
Event db array : Associated with the event context array
(struct mhi_event.ring.db_addr), used by the host to notify the device that
free event ring elements are available.
Data structures
---------------
Host memory : Directly accessed by the host to manage the MHI data structures
and buffers. The device accesses the host memory over the PCIe interface.
Channel context array : All channel configurations are organized in a channel
context data array.
struct __packed mhi_chan_ctxt;
struct mhi_ctxt.chan_ctxt;
Transfer rings : Used by the host to schedule work items for a channel and
organized as a circular queue of transfer descriptors (TD).
struct __packed mhi_tre;
struct mhi_chan.tre_ring;
Event context array : All event configurations are organized in an event
context data array.
struct mhi_ctxt.er_ctxt;
struct __packed mhi_event_ctxt;
Event rings : Used by the device to send completion and state transition
messages to the host.
struct mhi_event.ring;
struct __packed mhi_tre;
Command context array : All command configurations are organized in a command
context data array.
struct __packed mhi_cmd_ctxt;
struct mhi_ctxt.cmd_ctxt;
Command rings : Used by the host to send MHI commands to the device.
struct __packed mhi_tre;
struct mhi_cmd.ring;
Transfer rings
--------------
MHI channels are logical, unidirectional data pipes between host and device.
Each channel is associated with a single transfer ring. The data direction can
be either inbound (device to host) or outbound (host to device). Transfer
descriptors are managed by using transfer rings, which are defined for each
channel between device and host and reside in the host memory.
Transfer ring Pointer: Transfer Ring Array
[Read Pointer (RP)] ----------->[Ring Element] } TD
[Write Pointer (WP)]-           [Ring Element]
                     -          [Ring Element]
                      --------->[Ring Element]
                                [Ring Element]
1. Host allocates memory for the transfer ring
2. Host sets base, read pointer, and write pointer in the corresponding
channel context
3. Ring is considered empty when RP == WP
4. Ring is considered full when WP + 1 == RP
5. RP indicates the next element to be serviced by the device
6. When the host has a new buffer to send, it updates the ring element with
buffer information
7. Host increments the WP to the next element
8. Host rings the associated channel DB
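The ring arithmetic above can be written out directly. The sketch below shows
the empty/full checks and the host-side queueing steps; the struct and helper
names are illustrative only (in the driver the bookkeeping lives in
struct mhi_chan.tre_ring, and ring_channel_db() stands in for the doorbell
write):

  struct td {
          void *buf;
          size_t len;
  };

  struct ring {
          struct td *el;  /* ring element array */
          u32 rp;         /* read pointer (element index) */
          u32 wp;         /* write pointer (element index) */
          u32 nr_el;      /* number of ring elements */
  };

  static bool ring_empty(struct ring *r)
  {
          return r->rp == r->wp;                     /* rule 3 */
  }

  static bool ring_full(struct ring *r)
  {
          return ((r->wp + 1) % r->nr_el) == r->rp;  /* rule 4 */
  }

  static int host_queue_td(struct ring *r, struct td *td)
  {
          if (ring_full(r))
                  return -ENOMEM;
          r->el[r->wp] = *td;                 /* rule 6: fill element */
          r->wp = (r->wp + 1) % r->nr_el;     /* rule 7: advance WP */
          ring_channel_db(r);                 /* rule 8: ring channel DB */
          return 0;
  }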
Event rings
-----------
Events from the device to the host are organized in event rings and defined in
event descriptors. Event rings are an array of EDs that resides in the host
memory.
Event ring Pointer: Event Ring Array
[Read Pointer (RP)] ----------->[Ring Element] } ED
[Write Pointer (WP)]-           [Ring Element]
                     -          [Ring Element]
                      --------->[Ring Element]
                                [Ring Element]
1. Host allocates memory for the event ring
2. Host sets base, read pointer, and write pointer in the corresponding
event context
3. Both host and device have a local copy of RP and WP
4. Ring is considered empty (no events to service) when WP + 1 == RP
5. Ring is full of events when RP == WP
6. RP - 1 is the last event the device programmed
7. When there is a new event to send, the device updates the ED pointed to
by RP
8. Device increments RP to the next element
9. Device triggers an interrupt
Example operation for data transfer:
1. Host prepares a TD with buffer information
2. Host increments Chan[id].ctxt.WP
3. Host rings the channel DB register
4. Device wakes up and processes the TD
5. Device generates a completion event for that TD by updating an ED
6. Device increments Event[id].ctxt.RP
7. Device triggers an MSI to wake the host
8. Host wakes up and checks the event ring for the completion event
9. Host updates Event[id].ctxt.WP to indicate the completion event has been
processed
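On the host side, steps 8 and 9 reduce to walking the event ring from a local
read cursor up to the RP the device last wrote, then returning consumed
elements by advancing WP. A rough sketch, reusing the illustrative struct ring
from above (process_event() and struct ev_ctxt stand in for client completion
handling and the shared event context):

  static void host_process_events(struct ring *ev, struct ev_ctxt *ctxt)
  {
          u32 local_rp = ev->rp;            /* host's local read cursor */

          while (local_rp != ctxt->rp) {    /* device produced new EDs */
                  process_event(&ev->el[local_rp]); /* complete the TD */
                  local_rp = (local_rp + 1) % ev->nr_el;
                  ctxt->wp = local_rp;      /* mark ED processed/reusable */
          }
          ev->rp = local_rp;
  }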
Time sync
---------
To synchronize applications between the host and the external modem, MHI
provides native support to read the external modem's free running timer value
in a fast, reliable way. MHI clients do not need to create client specific
methods to get the modem time.
When a client requests the modem time, the MHI host automatically captures the
host time at that moment, so clients are able to do accurate drift adjustment.
Example:
Client request time @ time T1
Host Time: Tx
Modem Time: Ty
Client request time @ time T2
Host Time: Txx
Modem Time: Tyy
Then drift is:
Tyy - Ty + <drift> == Txx - Tx
Clients are free to implement their own drift algorithms; what the MHI host
provides is a way to accurately correlate host time with external modem time.
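Rearranged, <drift> == (Txx - Tx) - (Tyy - Ty). Assuming the modem samples
have already been converted from ticks to microseconds (the
REMOTE_TICKS_TO_US() helper in mhi_qcom.h below does this for the 19.2 MHz
modem timer), the computation reduces to:

  /* illustrative: all four samples already converted to microseconds */
  u64 host_delta  = txx - tx;        /* host elapsed time  */
  u64 modem_delta = tyy - ty;        /* modem elapsed time */
  s64 drift_us    = host_delta - modem_delta;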
To avoid link level latencies, the controller must support capabilities to
disable any link level latency.
During time capture the host will:
1. Capture host time
2. Trigger doorbell to capture modem time
It is important that the time between Step 1 and Step 2 is as deterministic as
possible. Therefore, the MHI host will:
1. Disable any MHI-related low power modes
2. Disable preemption
3. Request the bus master to disable any link level latencies. The controller
should disable all low power modes such as L0s, L1, and L1ss.
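In terms of the controller callbacks defined later in this patch
(lpm_disable/lpm_enable and time_get on struct mhi_controller), the capture
sequence looks roughly like the sketch below; ring_time_capture_db() is
illustrative, the real doorbell trigger lives in the MHI core:

  static u64 capture_host_time(struct mhi_controller *mhi_cntrl)
  {
          u64 host_time;

          /* items 1 and 3: block low power modes / link latencies */
          mhi_cntrl->lpm_disable(mhi_cntrl, mhi_cntrl->priv_data);
          preempt_disable();                              /* item 2 */
          host_time = mhi_cntrl->time_get(mhi_cntrl, mhi_cntrl->priv_data);
          ring_time_capture_db(mhi_cntrl); /* illustrative doorbell */
          preempt_enable();
          mhi_cntrl->lpm_enable(mhi_cntrl, mhi_cntrl->priv_data);
          return host_time;
  }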
MHI States
----------
enum MHI_STATE {
MHI_STATE_RESET : MHI is in reset state, the POR state. Host is not allowed
to access the device MMIO register space.
MHI_STATE_READY : Device is ready for initialization. Host can start MHI
initialization by programming MMIO registers.
MHI_STATE_M0 : MHI is in fully active state, data transfer is active.
MHI_STATE_M1 : Device is in a suspended state.
MHI_STATE_M2 : MHI is in low power mode, device may enter a lower power mode.
MHI_STATE_M3 : Both host and device are in a suspended state. PCIe link is
not accessible to the device.
};
MHI Initialization
------------------
1. After the system boots, the device is enumerated over the PCIe interface
2. Host allocates MHI context for event, channel and command arrays
3. Host initializes the context arrays and prepares interrupts
4. Host waits until the device enters READY state
5. Host programs MHI MMIO registers and sets the device into MHI_M0 state
6. Host waits for the device to enter M0 state
Linux Software Architecture
===========================
MHI Controller
--------------
The MHI controller is also the MHI bus master, in charge of managing the
physical link between host and device. It is not involved in the actual data
transfer. Currently this covers PCIe based buses; support for other bus types
can be added later.
Roles:
1. Turn on PCIe bus and configure the link
2. Configure MSI, SMMU, and IOMEM
3. Allocate struct mhi_controller and register with the MHI bus framework
4. Initiate power on and shutdown sequence
5. Initiate suspend and resume
Usage
-----
1. Allocate control data structure by calling mhi_alloc_controller()
2. Initialize mhi_controller with all the known information such as:
- Device Topology
- IOMMU window
- IOMEM mapping
- Device to use for memory allocation, and of_node with DT configuration
- Configure asynchronous callback functions
3. Register MHI controller with MHI bus framework by calling
of_register_mhi_controller()
After successful registration, the controller can initiate any of these power
sequences:
1. Power up sequence
- mhi_prepare_for_power_up()
- mhi_async_power_up()
- mhi_sync_power_up()
2. Power down sequence
- mhi_power_down()
- mhi_unprepare_after_power_down()
3. Initiate suspend
- mhi_pm_suspend()
4. Initiate resume
- mhi_pm_resume()
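Putting the usage steps together, a controller's registration and power-up
path looks roughly like the sketch below. It mirrors mhi_register_controller()
and mhi_pci_probe() in mhi_qcom.c later in this patch, with error handling
trimmed:

  mhi_cntrl = mhi_alloc_controller(sizeof(*mhi_dev));     /* step 1 */
  mhi_cntrl->of_node = pci_dev->dev.of_node;              /* step 2: DT */
  mhi_cntrl->status_cb = mhi_status_cb;    /* asynchronous callbacks */
  mhi_cntrl->runtime_get = mhi_runtime_get;
  mhi_cntrl->runtime_put = mhi_runtime_put;
  ret = of_register_mhi_controller(mhi_cntrl);            /* step 3 */
  if (!ret)
          ret = mhi_sync_power_up(mhi_cntrl); /* or the async variant */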
MHI Devices
-----------
An MHI device is a logical device that binds to a maximum of two physical MHI
channels. Once MHI is in the powered on state, each channel supported by the
controller will be allocated as an mhi_device.
Each supported device is enumerated under
/sys/bus/mhi/devices/
struct mhi_device;
MHI Driver
----------
Each MHI driver can bind to one or more MHI devices. The MHI host driver will
bind an mhi_device to an mhi_driver.
All registered drivers are visible under
/sys/bus/mhi/drivers/
struct mhi_driver;
Usage
-----
1. Register the driver using mhi_driver_register
2. Before sending data, prepare the device for transfer by calling
mhi_prepare_for_transfer
3. Initiate data transfer by calling mhi_queue_transfer
4. After finishing, call mhi_unprepare_from_transfer to end the data transfer
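A minimal client, patterned after the mhi_bl_driver in this snapshot, would
look roughly like this (channel name and callback bodies are illustrative):

  static void my_xfer_cb(struct mhi_device *mhi_device,
                         struct mhi_result *mhi_result)
  {
          /* step 3 completions arrive here (buf_addr, bytes_xferd) */
  }

  static int my_probe(struct mhi_device *mhi_device,
                      const struct mhi_device_id *id)
  {
          return mhi_prepare_for_transfer(mhi_device);    /* step 2 */
  }

  static void my_remove(struct mhi_device *mhi_device)
  {
          mhi_unprepare_from_transfer(mhi_device);        /* step 4 */
  }

  static const struct mhi_device_id my_match_table[] = {
          { .chan = "EXAMPLE" },  /* hypothetical channel name */
          {},
  };

  static struct mhi_driver my_driver = {
          .id_table = my_match_table,
          .probe = my_probe,
          .remove = my_remove,
          .ul_xfer_cb = my_xfer_cb,
          .dl_xfer_cb = my_xfer_cb,
          .driver = { .name = "my_mhi_client" },
  };

  /* step 1: from module init */
  ret = mhi_driver_register(&my_driver);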

drivers/bus/Kconfig

@@ -191,6 +191,25 @@ config DA8XX_MSTPRI
configuration. Allows to adjust the priorities of all master
peripherals.
config MHI_BUS
tristate "Modem Host Interface"
help
Modem Host Interface (MHI) is a communication protocol used by the host
to control and communicate with a modem over a high speed peripheral bus.
Enabling this module will allow the host to communicate with external
devices that support the MHI protocol.
config MHI_DEBUG
bool "MHI debug support"
depends on MHI_BUS
help
Say yes here to enable debugging support in the MHI transport
and individual MHI client drivers. This option will impact
throughput as individual MHI packets and state transitions
will be logged.
source "drivers/bus/fsl-mc/Kconfig"
source "drivers/bus/mhi/controllers/Kconfig"
source "drivers/bus/mhi/devices/Kconfig"
endmenu

drivers/bus/Makefile

@@ -33,3 +33,4 @@ obj-$(CONFIG_UNIPHIER_SYSTEM_BUS) += uniphier-system-bus.o
obj-$(CONFIG_VEXPRESS_CONFIG) += vexpress-config.o
obj-$(CONFIG_DA8XX_MSTPRI) += da8xx-mstpri.o
obj-$(CONFIG_MHI_BUS) += mhi/

drivers/bus/mhi/Makefile

@@ -0,0 +1,9 @@
# SPDX-License-Identifier: GPL-2.0-only
#
# Makefile for the MHI stack
#
# core layer
obj-y += core/
obj-y += controllers/
obj-y += devices/

drivers/bus/mhi/controllers/Kconfig

@@ -0,0 +1,15 @@
# SPDX-License-Identifier: GPL-2.0-only
menu "MHI controllers"
config MHI_QCOM
tristate "MHI QCOM"
depends on MHI_BUS
help
If you say yes to this option, MHI bus support for QCOM modem chipsets
will be enabled. QCOM PCIe based modems use MHI as the communication
protocol. The MHI control driver is the bus master for such modems. As the
bus master driver, it oversees power management operations such as
suspend, resume, powering on and off the device.
endmenu

drivers/bus/mhi/controllers/Makefile

@@ -0,0 +1,4 @@
# SPDX-License-Identifier: GPL-2.0-only
obj-$(CONFIG_MHI_QCOM) += mhi_qcom_drv.o
mhi_qcom_drv-y := mhi_qcom.o mhi_arch_qcom.o

drivers/bus/mhi/controllers/mhi_arch_qcom.c

@@ -0,0 +1,750 @@
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.*/
#include <asm/dma-iommu.h>
#include <linux/async.h>
#include <linux/debugfs.h>
#include <linux/device.h>
#include <linux/dma-direction.h>
#include <linux/esoc_client.h>
#include <linux/interconnect.h>
#include <linux/list.h>
#include <linux/of.h>
#include <linux/memblock.h>
#include <linux/module.h>
#include <linux/msm_pcie.h>
#include <linux/pm_runtime.h>
#include <linux/slab.h>
#include <linux/suspend.h>
#include <linux/mhi.h>
#include "mhi_qcom.h"
struct arch_info {
struct mhi_dev *mhi_dev;
struct esoc_desc *esoc_client;
struct esoc_client_hook esoc_ops;
struct icc_path *icc_path;
u32 *icc_peak_bw;
u32 icc_peak_bw_len;
struct msm_pcie_register_event pcie_reg_event;
struct pci_saved_state *pcie_state;
async_cookie_t cookie;
void *boot_ipc_log;
void *tsync_ipc_log;
struct mhi_device *boot_dev;
bool drv_connected;
struct notifier_block pm_notifier;
struct completion pm_completion;
};
/* ipc log markings */
#define DLOG "Dev->Host: "
#define HLOG "Host: "
#define MHI_TSYNC_LOG_PAGES (10)
#ifdef CONFIG_MHI_DEBUG
#define MHI_IPC_LOG_PAGES (100)
enum MHI_DEBUG_LEVEL mhi_ipc_log_lvl = MHI_MSG_LVL_VERBOSE;
#else
#define MHI_IPC_LOG_PAGES (10)
enum MHI_DEBUG_LEVEL mhi_ipc_log_lvl = MHI_MSG_LVL_ERROR;
#endif
static int mhi_arch_pm_notifier(struct notifier_block *nb,
unsigned long event, void *unused)
{
struct arch_info *arch_info =
container_of(nb, struct arch_info, pm_notifier);
switch (event) {
case PM_SUSPEND_PREPARE:
reinit_completion(&arch_info->pm_completion);
break;
case PM_POST_SUSPEND:
complete_all(&arch_info->pm_completion);
break;
}
return NOTIFY_DONE;
}
void mhi_arch_timesync_log(struct mhi_controller *mhi_cntrl, u64 remote_time)
{
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct arch_info *arch_info = mhi_dev->arch_info;
if (remote_time != U64_MAX)
ipc_log_string(arch_info->tsync_ipc_log, "%6u.%06lu 0x%llx",
REMOTE_TICKS_TO_SEC(remote_time),
REMOTE_TIME_REMAINDER_US(remote_time),
remote_time);
}
static int mhi_arch_set_bus_request(struct mhi_controller *mhi_cntrl, int index)
{
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct arch_info *arch_info = mhi_dev->arch_info;
MHI_LOG("Setting bus request to index %d\n", index);
if (index >= arch_info->icc_peak_bw_len)
return -EINVAL;
if (arch_info->icc_path)
return icc_set_bw(arch_info->icc_path, 0,
arch_info->icc_peak_bw[index]);
/* default return success */
return 0;
}
static void mhi_arch_pci_link_state_cb(struct msm_pcie_notify *notify)
{
struct mhi_controller *mhi_cntrl = notify->data;
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct pci_dev *pci_dev = mhi_dev->pci_dev;
struct arch_info *arch_info = mhi_dev->arch_info;
switch (notify->event) {
case MSM_PCIE_EVENT_WAKEUP:
MHI_LOG("Received MSM_PCIE_EVENT_WAKE signal\n");
/* bring link out of d3cold */
if (mhi_dev->powered_on) {
pm_runtime_get(&pci_dev->dev);
pm_runtime_put_noidle(&pci_dev->dev);
}
break;
case MSM_PCIE_EVENT_L1SS_TIMEOUT:
MHI_VERB("Received MSM_PCIE_EVENT_L1SS_TIMEOUT signal\n");
pm_runtime_mark_last_busy(&pci_dev->dev);
pm_request_autosuspend(&pci_dev->dev);
break;
case MSM_PCIE_EVENT_DRV_CONNECT:
/* drv is connected we can suspend now */
MHI_LOG("Received MSM_PCIE_EVENT_DRV_CONNECT signal\n");
arch_info->drv_connected = true;
mutex_lock(&mhi_cntrl->pm_mutex);
/* if we're in amss attempt a suspend */
if (mhi_dev->powered_on && mhi_cntrl->ee == MHI_EE_AMSS) {
pm_runtime_allow(&pci_dev->dev);
pm_runtime_mark_last_busy(&pci_dev->dev);
pm_request_autosuspend(&pci_dev->dev);
}
mutex_unlock(&mhi_cntrl->pm_mutex);
break;
case MSM_PCIE_EVENT_DRV_DISCONNECT:
MHI_LOG("Received MSM_PCIE_EVENT_DRV_DISCONNECT signal\n");
/*
* if link suspended bring it out of suspend and disable runtime
* suspend
*/
arch_info->drv_connected = false;
pm_runtime_forbid(&pci_dev->dev);
break;
default:
MHI_ERR("Unhandled event 0x%x\n", notify->event);
}
}
static int mhi_arch_esoc_ops_power_on(void *priv, unsigned int flags)
{
struct mhi_controller *mhi_cntrl = priv;
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct pci_dev *pci_dev = mhi_dev->pci_dev;
int ret;
mutex_lock(&mhi_cntrl->pm_mutex);
if (mhi_dev->powered_on) {
MHI_LOG("MHI still in active state\n");
mutex_unlock(&mhi_cntrl->pm_mutex);
return 0;
}
MHI_LOG("Enter\n");
/* reset rpm state */
pm_runtime_set_active(&pci_dev->dev);
pm_runtime_enable(&pci_dev->dev);
mutex_unlock(&mhi_cntrl->pm_mutex);
pm_runtime_forbid(&pci_dev->dev);
ret = pm_runtime_get_sync(&pci_dev->dev);
if (ret < 0) {
MHI_ERR("Error with rpm resume, ret:%d\n", ret);
return ret;
}
/* re-start the link & recover default cfg states */
ret = msm_pcie_pm_control(MSM_PCIE_RESUME, pci_dev->bus->number,
pci_dev, NULL, 0);
if (ret) {
MHI_ERR("Failed to resume pcie bus ret %d\n", ret);
return ret;
}
mhi_dev->mdm_state = (flags & ESOC_HOOK_MDM_CRASH);
return mhi_pci_probe(pci_dev, NULL);
}
static void mhi_arch_link_off(struct mhi_controller *mhi_cntrl)
{
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct pci_dev *pci_dev = mhi_dev->pci_dev;
MHI_LOG("Entered\n");
pci_set_power_state(pci_dev, PCI_D3hot);
/* release the resources */
msm_pcie_pm_control(MSM_PCIE_SUSPEND, mhi_cntrl->bus, pci_dev, NULL, 0);
mhi_arch_set_bus_request(mhi_cntrl, 0);
MHI_LOG("Exited\n");
}
static void mhi_arch_esoc_ops_power_off(void *priv, unsigned int flags)
{
struct mhi_controller *mhi_cntrl = priv;
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct arch_info *arch_info = mhi_dev->arch_info;
struct pci_dev *pci_dev = mhi_dev->pci_dev;
bool mdm_state = (flags & ESOC_HOOK_MDM_CRASH);
MHI_LOG("Enter: mdm_crashed:%d\n", mdm_state);
/*
* Abort system suspend if system is preparing to go to suspend
* by grabbing wake source.
* If system is suspended, wait for pm notifier callback to notify
* that resume has occurred with PM_POST_SUSPEND event.
*/
pm_stay_awake(&mhi_cntrl->mhi_dev->dev);
wait_for_completion(&arch_info->pm_completion);
/* if link is in drv suspend, wake it up */
pm_runtime_get_sync(&pci_dev->dev);
mutex_lock(&mhi_cntrl->pm_mutex);
if (!mhi_dev->powered_on) {
MHI_LOG("Not in active state\n");
mutex_unlock(&mhi_cntrl->pm_mutex);
pm_runtime_put_noidle(&pci_dev->dev);
return;
}
mhi_dev->powered_on = false;
mutex_unlock(&mhi_cntrl->pm_mutex);
pm_runtime_put_noidle(&pci_dev->dev);
MHI_LOG("Triggering shutdown process\n");
mhi_power_down(mhi_cntrl, !mdm_state);
/* turn the link off */
mhi_deinit_pci_dev(mhi_cntrl);
mhi_arch_link_off(mhi_cntrl);
/* wait for boot monitor to exit */
async_synchronize_cookie(arch_info->cookie + 1);
mhi_arch_pcie_deinit(mhi_cntrl);
mhi_cntrl->dev = NULL;
pm_relax(&mhi_cntrl->mhi_dev->dev);
}
static void mhi_arch_esoc_ops_mdm_error(void *priv)
{
struct mhi_controller *mhi_cntrl = priv;
MHI_LOG("Enter: mdm asserted\n");
/* transition MHI state into error state */
mhi_control_error(mhi_cntrl);
MHI_LOG("Exit\n");
}
static void mhi_bl_dl_cb(struct mhi_device *mhi_device,
struct mhi_result *mhi_result)
{
struct mhi_controller *mhi_cntrl = mhi_device->mhi_cntrl;
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct arch_info *arch_info = mhi_dev->arch_info;
char *buf = mhi_result->buf_addr;
/* force a null at last character */
buf[mhi_result->bytes_xferd - 1] = 0;
ipc_log_string(arch_info->boot_ipc_log, "%s %s", DLOG, buf);
}
static void mhi_bl_dummy_cb(struct mhi_device *mhi_dev,
struct mhi_result *mhi_result)
{
}
static void mhi_bl_remove(struct mhi_device *mhi_device)
{
struct mhi_controller *mhi_cntrl = mhi_device->mhi_cntrl;
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct arch_info *arch_info = mhi_dev->arch_info;
arch_info->boot_dev = NULL;
ipc_log_string(arch_info->boot_ipc_log,
HLOG "Received Remove notif.\n");
}
static void mhi_boot_monitor(void *data, async_cookie_t cookie)
{
struct mhi_controller *mhi_cntrl = data;
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct arch_info *arch_info = mhi_dev->arch_info;
struct mhi_device *boot_dev;
/* 15 sec timeout for booting device */
const u32 timeout = msecs_to_jiffies(15000);
/* wait for device to enter boot stage */
wait_event_timeout(mhi_cntrl->state_event, mhi_cntrl->ee == MHI_EE_AMSS
|| mhi_cntrl->ee == MHI_EE_DISABLE_TRANSITION
|| mhi_cntrl->power_down,
timeout);
ipc_log_string(arch_info->boot_ipc_log, HLOG "Device current ee = %s\n",
TO_MHI_EXEC_STR(mhi_cntrl->ee));
/* if we successfully booted to amss disable boot log channel */
if (mhi_cntrl->ee == MHI_EE_AMSS) {
boot_dev = arch_info->boot_dev;
if (boot_dev)
mhi_unprepare_from_transfer(boot_dev);
if (!mhi_dev->drv_supported || arch_info->drv_connected)
pm_runtime_allow(&mhi_dev->pci_dev->dev);
}
}
int mhi_arch_power_up(struct mhi_controller *mhi_cntrl)
{
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct arch_info *arch_info = mhi_dev->arch_info;
/* start a boot monitor if not in crashed state */
if (!mhi_dev->mdm_state)
arch_info->cookie = async_schedule(mhi_boot_monitor, mhi_cntrl);
return 0;
}
static int mhi_arch_pcie_scale_bw(struct mhi_controller *mhi_cntrl,
struct pci_dev *pci_dev,
struct mhi_link_info *link_info)
{
int ret;
mhi_cntrl->lpm_disable(mhi_cntrl, mhi_cntrl->priv_data);
ret = msm_pcie_set_link_bandwidth(pci_dev, link_info->target_link_speed,
link_info->target_link_width);
mhi_cntrl->lpm_enable(mhi_cntrl, mhi_cntrl->priv_data);
if (ret)
return ret;
/* do a bus scale vote based on gen speeds */
mhi_arch_set_bus_request(mhi_cntrl, link_info->target_link_speed);
MHI_VERB("bw changed to speed:0x%x width:0x%x\n",
link_info->target_link_speed, link_info->target_link_width);
return 0;
}
static int mhi_arch_bw_scale(struct mhi_controller *mhi_cntrl,
struct mhi_link_info *link_info)
{
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct pci_dev *pci_dev = mhi_dev->pci_dev;
return mhi_arch_pcie_scale_bw(mhi_cntrl, pci_dev, link_info);
}
static int mhi_bl_probe(struct mhi_device *mhi_device,
const struct mhi_device_id *id)
{
char node_name[32];
struct mhi_controller *mhi_cntrl = mhi_device->mhi_cntrl;
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct arch_info *arch_info = mhi_dev->arch_info;
snprintf(node_name, sizeof(node_name), "mhi_bl_%04x_%02u.%02u.%02u",
mhi_device->dev_id, mhi_device->domain, mhi_device->bus,
mhi_device->slot);
arch_info->boot_dev = mhi_device;
arch_info->boot_ipc_log = ipc_log_context_create(MHI_IPC_LOG_PAGES,
node_name, 0);
ipc_log_string(arch_info->boot_ipc_log, HLOG
"Entered SBL, Session ID:0x%x\n", mhi_cntrl->session_id);
return 0;
}
static const struct mhi_device_id mhi_bl_match_table[] = {
{ .chan = "BL" },
{},
};
static struct mhi_driver mhi_bl_driver = {
.id_table = mhi_bl_match_table,
.remove = mhi_bl_remove,
.probe = mhi_bl_probe,
.ul_xfer_cb = mhi_bl_dummy_cb,
.dl_xfer_cb = mhi_bl_dl_cb,
.driver = {
.name = "MHI_BL",
.owner = THIS_MODULE,
},
};
int mhi_arch_pcie_init(struct mhi_controller *mhi_cntrl)
{
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct arch_info *arch_info = mhi_dev->arch_info;
struct mhi_link_info *cur_link_info;
char node[32];
int ret;
int size = 0;
u16 linkstat;
if (!arch_info) {
struct msm_pcie_register_event *reg_event;
struct pci_dev *root_port;
struct device_node *root_ofnode;
arch_info = devm_kzalloc(&mhi_dev->pci_dev->dev,
sizeof(*arch_info), GFP_KERNEL);
if (!arch_info)
return -ENOMEM;
mhi_dev->arch_info = arch_info;
arch_info->mhi_dev = mhi_dev;
snprintf(node, sizeof(node), "mhi_%04x_%02u.%02u.%02u",
mhi_cntrl->dev_id, mhi_cntrl->domain, mhi_cntrl->bus,
mhi_cntrl->slot);
mhi_cntrl->log_buf = ipc_log_context_create(MHI_IPC_LOG_PAGES,
node, 0);
mhi_cntrl->log_lvl = mhi_ipc_log_lvl;
snprintf(node, sizeof(node), "mhi_tsync_%04x_%02u.%02u.%02u",
mhi_cntrl->dev_id, mhi_cntrl->domain, mhi_cntrl->bus,
mhi_cntrl->slot);
arch_info->tsync_ipc_log = ipc_log_context_create(
MHI_TSYNC_LOG_PAGES, node, 0);
if (arch_info->tsync_ipc_log)
mhi_cntrl->tsync_log = mhi_arch_timesync_log;
/* check if root-complex support DRV */
root_port = pci_find_pcie_root_port(mhi_dev->pci_dev);
root_ofnode = root_port->dev.of_node;
if (root_ofnode->parent)
mhi_dev->drv_supported =
of_property_read_bool(root_ofnode->parent,
"qcom,drv-supported");
/* get icc path from PCIe platform device */
arch_info->icc_path = of_icc_get(root_port->dev.parent->parent,
"ep_icc_path");
if (IS_ERR_OR_NULL(arch_info->icc_path)) {
ret = arch_info->icc_path ?
PTR_ERR(arch_info->icc_path) : -EINVAL;
return ret;
}
of_get_property(mhi_dev->pci_dev->dev.of_node, "icc-peak-bw", &size);
if (!size)
return -EINVAL;
arch_info->icc_peak_bw_len = size / sizeof(u32);
arch_info->icc_peak_bw = devm_kcalloc(&mhi_dev->pci_dev->dev,
arch_info->icc_peak_bw_len,
sizeof(u32), GFP_KERNEL);
if (!arch_info->icc_peak_bw)
return -ENOMEM;
ret = of_property_read_u32_array(mhi_dev->pci_dev->dev.of_node,
"icc-peak-bw", arch_info->icc_peak_bw,
arch_info->icc_peak_bw_len);
if (ret)
return -EINVAL;
/* register with pcie rc for WAKE# events */
reg_event = &arch_info->pcie_reg_event;
reg_event->events =
MSM_PCIE_EVENT_WAKEUP | MSM_PCIE_EVENT_L1SS_TIMEOUT;
/* if drv supported register for drv connection events */
if (mhi_dev->drv_supported)
reg_event->events |= MSM_PCIE_EVENT_DRV_CONNECT |
MSM_PCIE_EVENT_DRV_DISCONNECT;
reg_event->user = mhi_dev->pci_dev;
reg_event->callback = mhi_arch_pci_link_state_cb;
reg_event->notify.data = mhi_cntrl;
ret = msm_pcie_register_event(reg_event);
if (ret)
MHI_LOG("Failed to reg. for link up notification\n");
init_completion(&arch_info->pm_completion);
/* register PM notifier to get post resume events */
arch_info->pm_notifier.notifier_call = mhi_arch_pm_notifier;
register_pm_notifier(&arch_info->pm_notifier);
/*
* Mark as completed at initial boot-up to allow ESOC power on
* callback to proceed if system has not gone to suspend
*/
complete_all(&arch_info->pm_completion);
arch_info->esoc_client = devm_register_esoc_client(
&mhi_dev->pci_dev->dev, "mdm");
if (IS_ERR_OR_NULL(arch_info->esoc_client)) {
MHI_ERR("Failed to register esoc client\n");
} else {
/* register for power on/off hooks */
struct esoc_client_hook *esoc_ops =
&arch_info->esoc_ops;
esoc_ops->priv = mhi_cntrl;
esoc_ops->prio = ESOC_MHI_HOOK;
esoc_ops->esoc_link_power_on =
mhi_arch_esoc_ops_power_on;
esoc_ops->esoc_link_power_off =
mhi_arch_esoc_ops_power_off;
esoc_ops->esoc_link_mdm_crash =
mhi_arch_esoc_ops_mdm_error;
ret = esoc_register_client_hook(arch_info->esoc_client,
esoc_ops);
if (ret)
MHI_ERR("Failed to register esoc ops\n");
}
/*
* MHI host driver has full autonomy to manage power state.
* Disable all automatic power collapse features
*/
msm_pcie_pm_control(MSM_PCIE_DISABLE_PC, mhi_cntrl->bus,
mhi_dev->pci_dev, NULL, 0);
mhi_dev->pci_dev->no_d3hot = true;
mhi_cntrl->bw_scale = mhi_arch_bw_scale;
mhi_driver_register(&mhi_bl_driver);
}
/* store the current bw info */
ret = pcie_capability_read_word(mhi_dev->pci_dev,
PCI_EXP_LNKSTA, &linkstat);
if (ret)
return ret;
cur_link_info = &mhi_cntrl->mhi_link_info;
cur_link_info->target_link_speed = linkstat & PCI_EXP_LNKSTA_CLS;
cur_link_info->target_link_width = (linkstat & PCI_EXP_LNKSTA_NLW) >>
PCI_EXP_LNKSTA_NLW_SHIFT;
return mhi_arch_set_bus_request(mhi_cntrl,
cur_link_info->target_link_speed);
}
void mhi_arch_pcie_deinit(struct mhi_controller *mhi_cntrl)
{
mhi_arch_set_bus_request(mhi_cntrl, 0);
}
static int mhi_arch_drv_suspend(struct mhi_controller *mhi_cntrl)
{
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct pci_dev *pci_dev = mhi_dev->pci_dev;
struct mhi_link_info link_info, *cur_link_info;
bool bw_switched = false;
int ret;
cur_link_info = &mhi_cntrl->mhi_link_info;
/* if link is not in gen 1 we need to switch to gen 1 */
if (cur_link_info->target_link_speed != PCI_EXP_LNKSTA_CLS_2_5GB) {
link_info.target_link_speed = PCI_EXP_LNKSTA_CLS_2_5GB;
link_info.target_link_width = cur_link_info->target_link_width;
ret = mhi_arch_pcie_scale_bw(mhi_cntrl, pci_dev, &link_info);
if (ret) {
MHI_ERR("Failed to switch Gen1 speed\n");
return -EBUSY;
}
bw_switched = true;
}
/* do a drv hand off */
ret = msm_pcie_pm_control(MSM_PCIE_DRV_SUSPEND, mhi_cntrl->bus,
pci_dev, NULL, 0);
/*
* we failed to suspend and scaled down pcie bw.. need to scale up again
*/
if (ret && bw_switched) {
mhi_arch_pcie_scale_bw(mhi_cntrl, pci_dev, cur_link_info);
return ret;
}
return ret;
}
int mhi_arch_link_suspend(struct mhi_controller *mhi_cntrl)
{
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct arch_info *arch_info = mhi_dev->arch_info;
struct pci_dev *pci_dev = mhi_dev->pci_dev;
int ret = 0;
MHI_LOG("Entered\n");
/* disable inactivity timer */
msm_pcie_l1ss_timeout_disable(pci_dev);
switch (mhi_dev->suspend_mode) {
case MHI_DEFAULT_SUSPEND:
pci_clear_master(pci_dev);
ret = pci_save_state(mhi_dev->pci_dev);
if (ret) {
MHI_ERR("Failed with pci_save_state, ret:%d\n", ret);
goto exit_suspend;
}
arch_info->pcie_state = pci_store_saved_state(pci_dev);
pci_disable_device(pci_dev);
pci_set_power_state(pci_dev, PCI_D3hot);
/* release the resources */
msm_pcie_pm_control(MSM_PCIE_SUSPEND, mhi_cntrl->bus, pci_dev,
NULL, 0);
mhi_arch_set_bus_request(mhi_cntrl, 0);
break;
case MHI_FAST_LINK_OFF:
ret = mhi_arch_drv_suspend(mhi_cntrl);
break;
case MHI_ACTIVE_STATE:
case MHI_FAST_LINK_ON:/* keeping link on do nothing */
break;
}
exit_suspend:
if (ret)
msm_pcie_l1ss_timeout_enable(pci_dev);
MHI_LOG("Exited with ret:%d\n", ret);
return ret;
}
static int __mhi_arch_link_resume(struct mhi_controller *mhi_cntrl)
{
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct arch_info *arch_info = mhi_dev->arch_info;
struct pci_dev *pci_dev = mhi_dev->pci_dev;
struct mhi_link_info *cur_info = &mhi_cntrl->mhi_link_info;
int ret;
MHI_LOG("Entered\n");
/* request bus scale voting based on higher gen speed */
ret = mhi_arch_set_bus_request(mhi_cntrl,
cur_info->target_link_speed);
if (ret)
MHI_LOG("Could not set bus frequency, ret:%d\n", ret);
ret = msm_pcie_pm_control(MSM_PCIE_RESUME, mhi_cntrl->bus, pci_dev,
NULL, 0);
if (ret) {
MHI_ERR("Link training failed, ret:%d\n", ret);
return ret;
}
ret = pci_set_power_state(pci_dev, PCI_D0);
if (ret) {
MHI_ERR("Failed to set PCI_D0 state, ret:%d\n", ret);
return ret;
}
ret = pci_enable_device(pci_dev);
if (ret) {
MHI_ERR("Failed to enable device, ret:%d\n", ret);
return ret;
}
ret = pci_load_and_free_saved_state(pci_dev, &arch_info->pcie_state);
if (ret)
MHI_LOG("Failed to load saved cfg state\n");
pci_restore_state(pci_dev);
pci_set_master(pci_dev);
return 0;
}
int mhi_arch_link_resume(struct mhi_controller *mhi_cntrl)
{
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct pci_dev *pci_dev = mhi_dev->pci_dev;
struct mhi_link_info *cur_info = &mhi_cntrl->mhi_link_info;
int ret = 0;
MHI_LOG("Entered\n");
switch (mhi_dev->suspend_mode) {
case MHI_DEFAULT_SUSPEND:
ret = __mhi_arch_link_resume(mhi_cntrl);
break;
case MHI_FAST_LINK_OFF:
ret = msm_pcie_pm_control(MSM_PCIE_RESUME, mhi_cntrl->bus,
pci_dev, NULL, 0);
if (ret ||
cur_info->target_link_speed == PCI_EXP_LNKSTA_CLS_2_5GB)
break;
/*
* BW request from device isn't for gen 1 link speed, we can
* only print an error here.
*/
if (mhi_arch_pcie_scale_bw(mhi_cntrl, pci_dev, cur_info))
MHI_ERR(
"Failed to honor bw request: speed:0x%x width:0x%x\n",
cur_info->target_link_speed,
cur_info->target_link_width);
break;
case MHI_ACTIVE_STATE:
case MHI_FAST_LINK_ON:
break;
}
if (ret) {
MHI_ERR("Link training failed, ret:%d\n", ret);
return ret;
}
msm_pcie_l1ss_timeout_enable(pci_dev);
MHI_LOG("Exited\n");
return 0;
}

drivers/bus/mhi/controllers/mhi_qcom.c

@@ -0,0 +1,941 @@
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.*/
#include <asm/arch_timer.h>
#include <linux/debugfs.h>
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/dma-direction.h>
#include <linux/list.h>
#include <linux/of.h>
#include <linux/memblock.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/pm_runtime.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/mhi.h>
#include "mhi_qcom.h"
struct firmware_info {
unsigned int dev_id;
const char *fw_image;
const char *edl_image;
};
static const struct firmware_info firmware_table[] = {
{.dev_id = 0x306, .fw_image = "sdx55m/sbl1.mbn"},
{.dev_id = 0x305, .fw_image = "sdx50m/sbl1.mbn"},
{.dev_id = 0x304, .fw_image = "sbl.mbn", .edl_image = "edl.mbn"},
/* default, set to debug.mbn */
{.fw_image = "debug.mbn"},
};
static int debug_mode;
int mhi_debugfs_trigger_m0(void *data, u64 val)
{
struct mhi_controller *mhi_cntrl = data;
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
MHI_LOG("Trigger M3 Exit\n");
pm_runtime_get(&mhi_dev->pci_dev->dev);
pm_runtime_put(&mhi_dev->pci_dev->dev);
return 0;
}
DEFINE_DEBUGFS_ATTRIBUTE(debugfs_trigger_m0_fops, NULL,
mhi_debugfs_trigger_m0, "%llu\n");
int mhi_debugfs_trigger_m3(void *data, u64 val)
{
struct mhi_controller *mhi_cntrl = data;
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
MHI_LOG("Trigger M3 Entry\n");
pm_runtime_mark_last_busy(&mhi_dev->pci_dev->dev);
pm_request_autosuspend(&mhi_dev->pci_dev->dev);
return 0;
}
DEFINE_DEBUGFS_ATTRIBUTE(debugfs_trigger_m3_fops, NULL,
mhi_debugfs_trigger_m3, "%llu\n");
void mhi_deinit_pci_dev(struct mhi_controller *mhi_cntrl)
{
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct pci_dev *pci_dev = mhi_dev->pci_dev;
pm_runtime_mark_last_busy(&pci_dev->dev);
pm_runtime_dont_use_autosuspend(&pci_dev->dev);
pm_runtime_disable(&pci_dev->dev);
/* reset counter for lpm state changes */
mhi_dev->lpm_disable_depth = 0;
pci_free_irq_vectors(pci_dev);
kfree(mhi_cntrl->irq);
mhi_cntrl->irq = NULL;
iounmap(mhi_cntrl->regs);
mhi_cntrl->regs = NULL;
pci_clear_master(pci_dev);
pci_release_region(pci_dev, mhi_dev->resn);
pci_disable_device(pci_dev);
}
static int mhi_init_pci_dev(struct mhi_controller *mhi_cntrl)
{
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct pci_dev *pci_dev = mhi_dev->pci_dev;
int ret;
resource_size_t len;
int i;
mhi_dev->resn = MHI_PCI_BAR_NUM;
ret = pci_assign_resource(pci_dev, mhi_dev->resn);
if (ret) {
MHI_ERR("Error assign pci resources, ret:%d\n", ret);
return ret;
}
ret = pci_enable_device(pci_dev);
if (ret) {
MHI_ERR("Error enabling device, ret:%d\n", ret);
goto error_enable_device;
}
ret = pci_request_region(pci_dev, mhi_dev->resn, "mhi");
if (ret) {
MHI_ERR("Error pci_request_region, ret:%d\n", ret);
goto error_request_region;
}
pci_set_master(pci_dev);
mhi_cntrl->base_addr = pci_resource_start(pci_dev, mhi_dev->resn);
len = pci_resource_len(pci_dev, mhi_dev->resn);
mhi_cntrl->regs = ioremap_nocache(mhi_cntrl->base_addr, len);
if (!mhi_cntrl->regs) {
MHI_ERR("Error ioremap region\n");
goto error_ioremap;
}
ret = pci_alloc_irq_vectors(pci_dev, mhi_cntrl->msi_required,
mhi_cntrl->msi_required, PCI_IRQ_MSI);
if (IS_ERR_VALUE((ulong)ret) || ret < mhi_cntrl->msi_required) {
MHI_ERR("Failed to enable MSI, ret:%d\n", ret);
goto error_req_msi;
}
mhi_cntrl->msi_allocated = ret;
mhi_cntrl->irq = kmalloc_array(mhi_cntrl->msi_allocated,
sizeof(*mhi_cntrl->irq), GFP_KERNEL);
if (!mhi_cntrl->irq) {
ret = -ENOMEM;
goto error_alloc_msi_vec;
}
for (i = 0; i < mhi_cntrl->msi_allocated; i++) {
mhi_cntrl->irq[i] = pci_irq_vector(pci_dev, i);
if (mhi_cntrl->irq[i] < 0) {
ret = mhi_cntrl->irq[i];
goto error_get_irq_vec;
}
}
dev_set_drvdata(&pci_dev->dev, mhi_cntrl);
/* configure runtime pm */
pm_runtime_set_autosuspend_delay(&pci_dev->dev, MHI_RPM_SUSPEND_TMR_MS);
pm_runtime_use_autosuspend(&pci_dev->dev);
pm_suspend_ignore_children(&pci_dev->dev, true);
/*
* pci framework will increment usage count (twice) before
* calling local device driver probe function.
* 1st pci.c pci_pm_init() calls pm_runtime_forbid
* 2nd pci-driver.c local_pci_probe calls pm_runtime_get_sync
* Framework expects the pci device driver to call
* pm_runtime_put_noidle to decrement usage count after
* successful probe, and call pm_runtime_allow to enable
* runtime suspend.
*/
pm_runtime_mark_last_busy(&pci_dev->dev);
pm_runtime_put_noidle(&pci_dev->dev);
return 0;
error_get_irq_vec:
kfree(mhi_cntrl->irq);
mhi_cntrl->irq = NULL;
error_alloc_msi_vec:
pci_free_irq_vectors(pci_dev);
error_req_msi:
iounmap(mhi_cntrl->regs);
error_ioremap:
pci_clear_master(pci_dev);
error_request_region:
pci_disable_device(pci_dev);
error_enable_device:
pci_release_region(pci_dev, mhi_dev->resn);
return ret;
}
static int mhi_runtime_suspend(struct device *dev)
{
int ret = 0;
struct mhi_controller *mhi_cntrl = dev_get_drvdata(dev);
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
MHI_LOG("Enter\n");
mutex_lock(&mhi_cntrl->pm_mutex);
if (!mhi_dev->powered_on) {
MHI_LOG("Not fully powered, return success\n");
mutex_unlock(&mhi_cntrl->pm_mutex);
return 0;
}
/* if drv is supported we will always go into drv */
if (mhi_dev->drv_supported) {
ret = mhi_pm_fast_suspend(mhi_cntrl, true);
mhi_dev->suspend_mode = MHI_FAST_LINK_OFF;
} else {
ret = mhi_pm_suspend(mhi_cntrl);
mhi_dev->suspend_mode = MHI_DEFAULT_SUSPEND;
/* regular suspend failed, probably a client has a vote */
if (ret == -EBUSY) {
ret = mhi_pm_fast_suspend(mhi_cntrl, false);
mhi_dev->suspend_mode = MHI_FAST_LINK_ON;
}
}
if (ret) {
MHI_LOG("Abort due to ret:%d\n", ret);
mhi_dev->suspend_mode = MHI_ACTIVE_STATE;
goto exit_runtime_suspend;
}
ret = mhi_arch_link_suspend(mhi_cntrl);
/* failed to suspend the link, abort mhi suspend */
if (ret) {
MHI_LOG("Failed to suspend link, abort suspend\n");
if (mhi_dev->suspend_mode == MHI_DEFAULT_SUSPEND)
mhi_pm_resume(mhi_cntrl);
else
mhi_pm_fast_resume(mhi_cntrl,
mhi_dev->suspend_mode == MHI_FAST_LINK_OFF);
mhi_dev->suspend_mode = MHI_ACTIVE_STATE;
}
exit_runtime_suspend:
mutex_unlock(&mhi_cntrl->pm_mutex);
MHI_LOG("Exited with ret:%d\n", ret);
return (ret < 0) ? -EBUSY : 0;
}
static int mhi_runtime_idle(struct device *dev)
{
struct mhi_controller *mhi_cntrl = dev_get_drvdata(dev);
MHI_LOG("Entered returning -EBUSY\n");
/*
* RPM framework during runtime resume always calls
* rpm_idle to see if the device is ready to suspend.
* If dev.power usage_count is 0, the rpm fw will call the
* rpm_idle cb to see if the device is ready to suspend.
* If the cb returns 0, or the cb is not defined, the framework will
* assume device driver is ready to suspend;
* therefore, fw will schedule runtime suspend.
* In MHI power management, MHI host shall go to
* runtime suspend only after entering MHI State M2, even if
* usage count is 0. Return -EBUSY to disable automatic suspend.
*/
return -EBUSY;
}
static int mhi_runtime_resume(struct device *dev)
{
int ret = 0;
struct mhi_controller *mhi_cntrl = dev_get_drvdata(dev);
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
MHI_LOG("Enter\n");
mutex_lock(&mhi_cntrl->pm_mutex);
if (!mhi_dev->powered_on) {
MHI_LOG("Not fully powered, return success\n");
mutex_unlock(&mhi_cntrl->pm_mutex);
return 0;
}
/* turn on link */
ret = mhi_arch_link_resume(mhi_cntrl);
if (ret)
goto rpm_resume_exit;
/* transition to M0 state */
if (mhi_dev->suspend_mode == MHI_DEFAULT_SUSPEND)
ret = mhi_pm_resume(mhi_cntrl);
else
ret = mhi_pm_fast_resume(mhi_cntrl,
mhi_dev->suspend_mode == MHI_FAST_LINK_OFF);
mhi_dev->suspend_mode = MHI_ACTIVE_STATE;
rpm_resume_exit:
mutex_unlock(&mhi_cntrl->pm_mutex);
MHI_LOG("Exited with :%d\n", ret);
return (ret < 0) ? -EBUSY : 0;
}
static int mhi_system_resume(struct device *dev)
{
return mhi_runtime_resume(dev);
}
int mhi_system_suspend(struct device *dev)
{
struct mhi_controller *mhi_cntrl = dev_get_drvdata(dev);
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
int ret;
MHI_LOG("Entered\n");
mutex_lock(&mhi_cntrl->pm_mutex);
if (!mhi_dev->powered_on) {
MHI_LOG("Not fully powered, return success\n");
mutex_unlock(&mhi_cntrl->pm_mutex);
return 0;
}
/*
* pci framework always makes a dummy vote to the rpm
* framework to resume before calling system suspend,
* hence the usage count is a minimum of one
*/
if (atomic_read(&dev->power.usage_count) > 1) {
/*
* clients have requested to keep link on, try
* fast suspend. No need to notify clients since
* we will not be turning off the pcie link
*/
ret = mhi_pm_fast_suspend(mhi_cntrl, false);
mhi_dev->suspend_mode = MHI_FAST_LINK_ON;
} else {
/* if drv enable always do fast suspend */
if (mhi_dev->drv_supported) {
ret = mhi_pm_fast_suspend(mhi_cntrl, true);
mhi_dev->suspend_mode = MHI_FAST_LINK_OFF;
} else {
/* try normal suspend */
mhi_dev->suspend_mode = MHI_DEFAULT_SUSPEND;
ret = mhi_pm_suspend(mhi_cntrl);
/*
* normal suspend failed because we're busy, try
* fast suspend before aborting system suspend.
* this could happen if a client has disabled
* device lpm but no active vote for PCIe from
* apps processor
*/
if (ret == -EBUSY) {
ret = mhi_pm_fast_suspend(mhi_cntrl, true);
mhi_dev->suspend_mode = MHI_FAST_LINK_ON;
}
}
}
if (ret) {
MHI_LOG("Abort due to ret:%d\n", ret);
mhi_dev->suspend_mode = MHI_ACTIVE_STATE;
goto exit_system_suspend;
}
ret = mhi_arch_link_suspend(mhi_cntrl);
/* failed to suspend the link, abort mhi suspend */
if (ret) {
MHI_LOG("Failed to suspend link, abort suspend\n");
if (mhi_dev->suspend_mode == MHI_DEFAULT_SUSPEND)
mhi_pm_resume(mhi_cntrl);
else
mhi_pm_fast_resume(mhi_cntrl,
mhi_dev->suspend_mode == MHI_FAST_LINK_OFF);
mhi_dev->suspend_mode = MHI_ACTIVE_STATE;
}
exit_system_suspend:
mutex_unlock(&mhi_cntrl->pm_mutex);
MHI_LOG("Exit with ret:%d\n", ret);
return ret;
}
static int mhi_force_suspend(struct mhi_controller *mhi_cntrl)
{
int ret = -EIO;
const u32 delayms = 100;
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
int itr = DIV_ROUND_UP(mhi_cntrl->timeout_ms, delayms);
MHI_LOG("Entered\n");
mutex_lock(&mhi_cntrl->pm_mutex);
for (; itr; itr--) {
/*
* This function gets called as soon as the device enters mission mode,
* so most of the channels are still in disabled state. However,
* sbl channels are active and clients could be trying to close
* channels while we are trying to suspend the link. So, we need to
* re-try if MHI is busy
*/
ret = mhi_pm_suspend(mhi_cntrl);
if (!ret || ret != -EBUSY)
break;
MHI_LOG("MHI busy, sleeping and retry\n");
msleep(delayms);
}
if (ret)
goto exit_force_suspend;
mhi_dev->suspend_mode = MHI_DEFAULT_SUSPEND;
ret = mhi_arch_link_suspend(mhi_cntrl);
exit_force_suspend:
MHI_LOG("Force suspend ret with %d\n", ret);
mutex_unlock(&mhi_cntrl->pm_mutex);
return ret;
}
/* checks if link is down */
static int mhi_link_status(struct mhi_controller *mhi_cntrl, void *priv)
{
struct mhi_dev *mhi_dev = priv;
u16 dev_id;
int ret;
/* try reading device id; if the dev id doesn't match, link is down */
ret = pci_read_config_word(mhi_dev->pci_dev, PCI_DEVICE_ID, &dev_id);
return (ret || dev_id != mhi_cntrl->dev_id) ? -EIO : 0;
}
/* disable PCIe L1 */
static int mhi_lpm_disable(struct mhi_controller *mhi_cntrl, void *priv)
{
struct mhi_dev *mhi_dev = priv;
struct pci_dev *pci_dev = mhi_dev->pci_dev;
int lnkctl = pci_dev->pcie_cap + PCI_EXP_LNKCTL;
u8 val;
unsigned long flags;
int ret = 0;
spin_lock_irqsave(&mhi_dev->lpm_lock, flags);
/* L1 is already disabled */
if (mhi_dev->lpm_disable_depth) {
mhi_dev->lpm_disable_depth++;
goto lpm_disable_exit;
}
ret = pci_read_config_byte(pci_dev, lnkctl, &val);
if (ret) {
MHI_ERR("Error reading LNKCTL, ret:%d\n", ret);
goto lpm_disable_exit;
}
/* L1 is not supported, do not increment lpm_disable_depth */
if (unlikely(!(val & PCI_EXP_LNKCTL_ASPM_L1)))
goto lpm_disable_exit;
val &= ~PCI_EXP_LNKCTL_ASPM_L1;
ret = pci_write_config_byte(pci_dev, lnkctl, val);
if (ret) {
MHI_ERR("Error writing LNKCTL to disable LPM, ret:%d\n", ret);
goto lpm_disable_exit;
}
mhi_dev->lpm_disable_depth++;
lpm_disable_exit:
spin_unlock_irqrestore(&mhi_dev->lpm_lock, flags);
return ret;
}
/* enable PCIe L1 */
static int mhi_lpm_enable(struct mhi_controller *mhi_cntrl, void *priv)
{
struct mhi_dev *mhi_dev = priv;
struct pci_dev *pci_dev = mhi_dev->pci_dev;
int lnkctl = pci_dev->pcie_cap + PCI_EXP_LNKCTL;
u8 val;
unsigned long flags;
int ret = 0;
spin_lock_irqsave(&mhi_dev->lpm_lock, flags);
/*
* Exit if L1 is not supported or is already disabled or
* decrementing lpm_disable_depth still keeps it above 0
*/
if (!mhi_dev->lpm_disable_depth)
goto lpm_enable_exit;
if (mhi_dev->lpm_disable_depth > 1) {
mhi_dev->lpm_disable_depth--;
goto lpm_enable_exit;
}
ret = pci_read_config_byte(pci_dev, lnkctl, &val);
if (ret) {
MHI_ERR("Error reading LNKCTL, ret:%d\n", ret);
goto lpm_enable_exit;
}
val |= PCI_EXP_LNKCTL_ASPM_L1;
ret = pci_write_config_byte(pci_dev, lnkctl, val);
if (ret) {
MHI_ERR("Error writing LNKCTL to enable LPM, ret:%d\n", ret);
goto lpm_enable_exit;
}
mhi_dev->lpm_disable_depth = 0;
lpm_enable_exit:
spin_unlock_irqrestore(&mhi_dev->lpm_lock, flags);
return ret;
}
void mhi_qcom_store_hwinfo(struct mhi_controller *mhi_cntrl)
{
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
int i;
mhi_dev->serial_num = readl_relaxed(mhi_cntrl->bhi +
MHI_BHI_SERIAL_NUM_OFFS);
for (i = 0; i < ARRAY_SIZE(mhi_dev->oem_pk_hash); i++)
mhi_dev->oem_pk_hash[i] = readl_relaxed(mhi_cntrl->bhi +
MHI_BHI_OEMPKHASH(i));
}
static int mhi_qcom_power_up(struct mhi_controller *mhi_cntrl)
{
enum mhi_dev_state dev_state = mhi_get_mhi_state(mhi_cntrl);
const u32 delayus = 10;
int itr = DIV_ROUND_UP(mhi_cntrl->timeout_ms * 1000, delayus);
int ret;
/*
* It's possible the device did not go through a cold reset before
* power up and is still in error state. If the device is in error
* state, we need to trigger a soft reset before continuing with
* power up
*/
if (dev_state == MHI_STATE_SYS_ERR) {
mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET);
while (itr--) {
dev_state = mhi_get_mhi_state(mhi_cntrl);
if (dev_state != MHI_STATE_SYS_ERR)
break;
usleep_range(delayus, delayus << 1);
}
/* device still in error state, abort power up */
if (dev_state == MHI_STATE_SYS_ERR)
return -EIO;
}
/* when coming out of SSR, initial states are not valid */
mhi_cntrl->ee = 0;
mhi_cntrl->power_down = false;
ret = mhi_arch_power_up(mhi_cntrl);
if (ret)
return ret;
ret = mhi_async_power_up(mhi_cntrl);
/* Update modem serial Info */
if (!ret)
mhi_qcom_store_hwinfo(mhi_cntrl);
/* power up created the dentry, add the debugfs files */
if (mhi_cntrl->dentry) {
debugfs_create_file("m0", 0444, mhi_cntrl->dentry, mhi_cntrl,
&debugfs_trigger_m0_fops);
debugfs_create_file("m3", 0444, mhi_cntrl->dentry, mhi_cntrl,
&debugfs_trigger_m3_fops);
}
return ret;
}
static int mhi_runtime_get(struct mhi_controller *mhi_cntrl, void *priv)
{
struct mhi_dev *mhi_dev = priv;
struct device *dev = &mhi_dev->pci_dev->dev;
return pm_runtime_get(dev);
}
static void mhi_runtime_put(struct mhi_controller *mhi_cntrl, void *priv)
{
struct mhi_dev *mhi_dev = priv;
struct device *dev = &mhi_dev->pci_dev->dev;
pm_runtime_put_noidle(dev);
}
static void mhi_status_cb(struct mhi_controller *mhi_cntrl,
void *priv,
enum MHI_CB reason)
{
struct mhi_dev *mhi_dev = priv;
struct device *dev = &mhi_dev->pci_dev->dev;
int ret;
switch (reason) {
case MHI_CB_IDLE:
MHI_LOG("Schedule runtime suspend\n");
pm_runtime_mark_last_busy(dev);
pm_request_autosuspend(dev);
break;
case MHI_CB_EE_MISSION_MODE:
/*
* we need to force a suspend so device can switch to
* mission mode pcie phy settings.
*/
pm_runtime_get(dev);
ret = mhi_force_suspend(mhi_cntrl);
if (!ret)
mhi_runtime_resume(dev);
pm_runtime_put(dev);
break;
default:
MHI_ERR("Unhandled cb:0x%x\n", reason);
}
}
/* capture host SoC XO time in ticks */
static u64 mhi_time_get(struct mhi_controller *mhi_cntrl, void *priv)
{
return arch_counter_get_cntvct();
}
static ssize_t timeout_ms_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct mhi_device *mhi_dev = to_mhi_device(dev);
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
/* buffer provided by sysfs has a minimum size of PAGE_SIZE */
return snprintf(buf, PAGE_SIZE, "%u\n", mhi_cntrl->timeout_ms);
}
static ssize_t timeout_ms_store(struct device *dev,
struct device_attribute *attr,
const char *buf,
size_t count)
{
struct mhi_device *mhi_dev = to_mhi_device(dev);
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
u32 timeout_ms;
if (kstrtou32(buf, 0, &timeout_ms) < 0)
return -EINVAL;
mhi_cntrl->timeout_ms = timeout_ms;
return count;
}
static DEVICE_ATTR_RW(timeout_ms);
static ssize_t power_up_store(struct device *dev,
struct device_attribute *attr,
const char *buf,
size_t count)
{
int ret;
struct mhi_device *mhi_dev = to_mhi_device(dev);
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
ret = mhi_qcom_power_up(mhi_cntrl);
if (ret)
return ret;
return count;
}
static DEVICE_ATTR_WO(power_up);
static ssize_t serial_info_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct mhi_device *mhi_device = to_mhi_device(dev);
struct mhi_controller *mhi_cntrl = mhi_device->mhi_cntrl;
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
int n;
n = scnprintf(buf, PAGE_SIZE, "Serial Number:%u\n",
mhi_dev->serial_num);
return n;
}
static DEVICE_ATTR_RO(serial_info);
static ssize_t oempkhash_info_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct mhi_device *mhi_device = to_mhi_device(dev);
struct mhi_controller *mhi_cntrl = mhi_device->mhi_cntrl;
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
int i, n = 0;
for (i = 0; i < ARRAY_SIZE(mhi_dev->oem_pk_hash); i++)
n += scnprintf(buf + n, PAGE_SIZE - n, "OEMPKHASH[%d]:%u\n",
i, mhi_dev->oem_pk_hash[i]);
return n;
}
static DEVICE_ATTR_RO(oempkhash_info);
static struct attribute *mhi_qcom_attrs[] = {
&dev_attr_timeout_ms.attr,
&dev_attr_power_up.attr,
&dev_attr_serial_info.attr,
&dev_attr_oempkhash_info.attr,
NULL
};
static const struct attribute_group mhi_qcom_group = {
.attrs = mhi_qcom_attrs,
};
static struct mhi_controller *mhi_register_controller(struct pci_dev *pci_dev)
{
struct mhi_controller *mhi_cntrl;
struct mhi_dev *mhi_dev;
struct device_node *of_node = pci_dev->dev.of_node;
const struct firmware_info *firmware_info;
bool use_s1;
u32 addr_win[2];
const char *iommu_dma_type;
int ret, i;
if (!of_node)
return ERR_PTR(-ENODEV);
mhi_cntrl = mhi_alloc_controller(sizeof(*mhi_dev));
if (!mhi_cntrl)
return ERR_PTR(-ENOMEM);
mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
mhi_cntrl->domain = pci_domain_nr(pci_dev->bus);
mhi_cntrl->dev_id = pci_dev->device;
mhi_cntrl->bus = pci_dev->bus->number;
mhi_cntrl->slot = PCI_SLOT(pci_dev->devfn);
mhi_cntrl->of_node = of_node;
mhi_cntrl->iova_start = memblock_start_of_DRAM();
mhi_cntrl->iova_stop = memblock_end_of_DRAM();
of_node = of_parse_phandle(mhi_cntrl->of_node, "qcom,iommu-group", 0);
if (of_node) {
use_s1 = true;
/*
* s1 translation can be in bypass or fastmap mode
* if "qcom,iommu-dma" property is missing, we assume s1 is
* enabled and in default (no fastmap/atomic) mode
*/
ret = of_property_read_string(of_node, "qcom,iommu-dma",
&iommu_dma_type);
if (!ret && !strcmp("bypass", iommu_dma_type))
use_s1 = false;
/*
* if s1 translation enabled pull iova addr from dt using
* iommu-dma-addr-pool property specified addresses
*/
if (use_s1) {
ret = of_property_read_u32_array(of_node,
"qcom,iommu-dma-addr-pool",
addr_win, 2);
if (ret) {
of_node_put(of_node);
return ERR_PTR(-EINVAL);
}
/*
* If S1 is enabled, set MHI_CTRL start address to 0
* so we can use low level mapping api to map buffers
* outside of smmu domain
*/
mhi_cntrl->iova_start = 0;
mhi_cntrl->iova_stop = addr_win[0] + addr_win[1];
}
of_node_put(of_node);
}
mhi_dev->pci_dev = pci_dev;
spin_lock_init(&mhi_dev->lpm_lock);
/* setup power management apis */
mhi_cntrl->status_cb = mhi_status_cb;
mhi_cntrl->runtime_get = mhi_runtime_get;
mhi_cntrl->runtime_put = mhi_runtime_put;
mhi_cntrl->link_status = mhi_link_status;
mhi_cntrl->lpm_disable = mhi_lpm_disable;
mhi_cntrl->lpm_enable = mhi_lpm_enable;
mhi_cntrl->time_get = mhi_time_get;
mhi_cntrl->remote_timer_freq = 19200000;
ret = of_register_mhi_controller(mhi_cntrl);
if (ret)
goto error_register;
for (i = 0; i < ARRAY_SIZE(firmware_table); i++) {
firmware_info = firmware_table + i;
/* debug mode always use default */
if (!debug_mode && mhi_cntrl->dev_id == firmware_info->dev_id)
break;
}
mhi_cntrl->fw_image = firmware_info->fw_image;
mhi_cntrl->edl_image = firmware_info->edl_image;
if (sysfs_create_group(&mhi_cntrl->mhi_dev->dev.kobj, &mhi_qcom_group))
MHI_ERR("Error while creating the sysfs group\n");
return mhi_cntrl;
error_register:
mhi_free_controller(mhi_cntrl);
return ERR_PTR(-EINVAL);
}
int mhi_pci_probe(struct pci_dev *pci_dev,
const struct pci_device_id *device_id)
{
struct mhi_controller *mhi_cntrl;
u32 domain = pci_domain_nr(pci_dev->bus);
u32 bus = pci_dev->bus->number;
u32 dev_id = pci_dev->device;
u32 slot = PCI_SLOT(pci_dev->devfn);
struct mhi_dev *mhi_dev;
int ret;
/* see if we already registered */
mhi_cntrl = mhi_bdf_to_controller(domain, bus, slot, dev_id);
if (!mhi_cntrl)
mhi_cntrl = mhi_register_controller(pci_dev);
if (IS_ERR(mhi_cntrl))
return PTR_ERR(mhi_cntrl);
mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
mhi_dev->powered_on = true;
ret = mhi_arch_pcie_init(mhi_cntrl);
if (ret)
return ret;
mhi_cntrl->dev = &mhi_dev->pci_dev->dev;
ret = dma_set_mask_and_coherent(mhi_cntrl->dev, DMA_BIT_MASK(64));
if (ret)
goto error_pci_probe;
ret = mhi_init_pci_dev(mhi_cntrl);
if (ret)
goto error_pci_probe;
/* start power up sequence */
if (!debug_mode) {
ret = mhi_qcom_power_up(mhi_cntrl);
if (ret)
goto error_power_up;
}
pm_runtime_mark_last_busy(&pci_dev->dev);
MHI_LOG("Return successful\n");
return 0;
error_power_up:
mhi_deinit_pci_dev(mhi_cntrl);
error_pci_probe:
mhi_arch_pcie_deinit(mhi_cntrl);
return ret;
}
static const struct dev_pm_ops pm_ops = {
SET_RUNTIME_PM_OPS(mhi_runtime_suspend,
mhi_runtime_resume,
mhi_runtime_idle)
SET_SYSTEM_SLEEP_PM_OPS(mhi_system_suspend, mhi_system_resume)
};
static struct pci_device_id mhi_pcie_device_id[] = {
{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0300)},
{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0301)},
{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0302)},
{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0303)},
{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0304)},
{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0305)},
{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0306)},
{PCI_DEVICE(MHI_PCIE_VENDOR_ID, MHI_PCIE_DEBUG_ID)},
{0},
};
static struct pci_driver mhi_pcie_driver = {
.name = "mhi",
.id_table = mhi_pcie_device_id,
.probe = mhi_pci_probe,
.driver = {
.pm = &pm_ops
}
};
module_pci_driver(mhi_pcie_driver);
MODULE_LICENSE("GPL v2");
MODULE_ALIAS("MHI_CORE");
MODULE_DESCRIPTION("MHI Host Driver");

drivers/bus/mhi/controllers/mhi_qcom.h

@@ -0,0 +1,105 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.*/
#ifndef _MHI_QCOM_
#define _MHI_QCOM_
/* iova cfg bitmask */
#define MHI_SMMU_ATTACH BIT(0)
#define MHI_SMMU_S1_BYPASS BIT(1)
#define MHI_SMMU_FAST BIT(2)
#define MHI_SMMU_ATOMIC BIT(3)
#define MHI_SMMU_FORCE_COHERENT BIT(4)
#define MHI_PCIE_VENDOR_ID (0x17cb)
#define MHI_PCIE_DEBUG_ID (0xffff)
#define MHI_BHI_SERIAL_NUM_OFFS (0x40)
#define MHI_BHI_OEMPKHASH(n) (0x64 + (0x4 * (n)))
#define MHI_BHI_OEMPKHASH_SEG (16)
/* runtime suspend timer */
#define MHI_RPM_SUSPEND_TMR_MS (250)
#define MHI_PCI_BAR_NUM (0)
/* timesync time calculations */
#define REMOTE_TICKS_TO_US(x) (div_u64((x) * 100ULL, \
div_u64(mhi_cntrl->remote_timer_freq, 10000ULL)))
#define REMOTE_TICKS_TO_SEC(x) (div_u64((x), \
mhi_cntrl->remote_timer_freq))
#define REMOTE_TIME_REMAINDER_US(x) (REMOTE_TICKS_TO_US((x)) % \
(REMOTE_TICKS_TO_SEC((x)) * 1000000ULL))
extern const char * const mhi_ee_str[MHI_EE_MAX];
#define TO_MHI_EXEC_STR(ee) ((ee) >= MHI_EE_MAX ? "INVALID_EE" : mhi_ee_str[(ee)])
enum mhi_suspend_mode {
MHI_ACTIVE_STATE,
MHI_DEFAULT_SUSPEND,
MHI_FAST_LINK_OFF,
MHI_FAST_LINK_ON,
};
#define MHI_IS_SUSPENDED(mode) (mode)
struct mhi_dev {
struct pci_dev *pci_dev;
bool drv_supported;
int resn;
void *arch_info;
bool powered_on;
bool mdm_state;
dma_addr_t iova_start;
dma_addr_t iova_stop;
enum mhi_suspend_mode suspend_mode;
/* hardware info */
u32 serial_num;
u32 oem_pk_hash[MHI_BHI_OEMPKHASH_SEG];
unsigned int lpm_disable_depth;
/* lock to toggle low power modes */
spinlock_t lpm_lock;
};
void mhi_deinit_pci_dev(struct mhi_controller *mhi_cntrl);
int mhi_pci_probe(struct pci_dev *pci_dev,
const struct pci_device_id *device_id);
#ifdef CONFIG_ARCH_QCOM
int mhi_arch_power_up(struct mhi_controller *mhi_cntrl);
int mhi_arch_pcie_init(struct mhi_controller *mhi_cntrl);
void mhi_arch_pcie_deinit(struct mhi_controller *mhi_cntrl);
int mhi_arch_link_suspend(struct mhi_controller *mhi_cntrl);
int mhi_arch_link_resume(struct mhi_controller *mhi_cntrl);
#else
static inline int mhi_arch_pcie_init(struct mhi_controller *mhi_cntrl)
{
return 0;
}
static inline void mhi_arch_pcie_deinit(struct mhi_controller *mhi_cntrl)
{
}
static inline int mhi_arch_link_suspend(struct mhi_controller *mhi_cntrl)
{
return 0;
}
static inline int mhi_arch_link_resume(struct mhi_controller *mhi_cntrl)
{
return 0;
}
static inline int mhi_arch_power_up(struct mhi_controller *mhi_cntrl)
{
return 0;
}
#endif
#endif /* _MHI_QCOM_ */

drivers/bus/mhi/core/Makefile

@@ -0,0 +1,4 @@
# SPDX-License-Identifier: GPL-2.0-only
obj-$(CONFIG_MHI_BUS) += mhi_bus.o
mhi_bus-y := mhi_init.o mhi_main.o mhi_pm.o mhi_boot.o mhi_dtr.o

drivers/bus/mhi/core/mhi_boot.c

@@ -0,0 +1,668 @@
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (c) 2018-2019, The Linux Foundation. All rights reserved. */
#include <linux/debugfs.h>
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/dma-direction.h>
#include <linux/dma-mapping.h>
#include <linux/firmware.h>
#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/of.h>
#include <linux/module.h>
#include <linux/random.h>
#include <linux/slab.h>
#include <linux/wait.h>
#include <linux/mhi.h>
#include "mhi_internal.h"
static void mhi_process_sfr(struct mhi_controller *mhi_cntrl,
struct file_info *info)
{
struct mhi_buf *mhi_buf = mhi_cntrl->rddm_image->mhi_buf;
u8 *sfr_buf, *cur, *file_offset = info->file_offset;
u32 file_size = info->file_size;
u32 rem_seg_len = info->rem_seg_len;
u32 seg_idx = info->seg_idx;
sfr_buf = kzalloc(file_size + 1, GFP_KERNEL);
if (!sfr_buf)
return;
/* copy via a separate cursor so sfr_buf still points at the allocation */
cur = sfr_buf;
while (file_size) {
/* file offset starting from seg base */
if (!rem_seg_len) {
file_offset = mhi_buf[seg_idx].buf;
if (file_size > mhi_buf[seg_idx].len)
rem_seg_len = mhi_buf[seg_idx].len;
else
rem_seg_len = file_size;
}
if (file_size <= rem_seg_len) {
memcpy(cur, file_offset, file_size);
break;
}
memcpy(cur, file_offset, rem_seg_len);
cur += rem_seg_len;
file_size -= rem_seg_len;
rem_seg_len = 0;
seg_idx++;
if (seg_idx == mhi_cntrl->rddm_image->entries) {
MHI_ERR("invalid size for SFR file\n");
goto err;
}
}
sfr_buf[info->file_size] = '\0';
/* force the SFR string into the kernel log */
MHI_ERR("%s\n", sfr_buf);
err:
kfree(sfr_buf);
}
static int mhi_find_next_file_offset(struct mhi_controller *mhi_cntrl,
struct file_info *info, struct rddm_table_info *table_info)
{
struct mhi_buf *mhi_buf = mhi_cntrl->rddm_image->mhi_buf;
if (info->rem_seg_len >= table_info->size) {
info->file_offset += table_info->size;
info->rem_seg_len -= table_info->size;
return 0;
}
info->file_size = table_info->size - info->rem_seg_len;
info->rem_seg_len = 0;
/* iterate over segments until eof is reached */
while (info->file_size) {
info->seg_idx++;
if (info->seg_idx == mhi_cntrl->rddm_image->entries) {
MHI_ERR("invalid size for file %s\n",
table_info->file_name);
return -EINVAL;
}
if (info->file_size > mhi_buf[info->seg_idx].len) {
info->file_size -= mhi_buf[info->seg_idx].len;
} else {
info->file_offset = mhi_buf[info->seg_idx].buf +
info->file_size;
info->rem_seg_len = mhi_buf[info->seg_idx].len -
info->file_size;
info->file_size = 0;
}
}
return 0;
}
void mhi_dump_sfr(struct mhi_controller *mhi_cntrl)
{
struct mhi_buf *mhi_buf = mhi_cntrl->rddm_image->mhi_buf;
struct rddm_header *rddm_header =
(struct rddm_header *)mhi_buf->buf;
struct rddm_table_info *table_info;
struct file_info info;
u32 table_size, n;
memset(&info, 0, sizeof(info));
if (rddm_header->header_size > sizeof(*rddm_header) ||
rddm_header->header_size < 8) {
MHI_ERR("invalid reported header size %u\n",
rddm_header->header_size);
return;
}
table_size = (rddm_header->header_size - 8) / sizeof(*table_info);
if (!table_size) {
MHI_ERR("invalid rddm table size %u\n", table_size);
return;
}
info.file_offset = (u8 *)rddm_header + rddm_header->header_size;
info.rem_seg_len = mhi_buf[0].len - rddm_header->header_size;
for (n = 0; n < table_size; n++) {
table_info = &rddm_header->table_info[n];
if (!strcmp(table_info->file_name, "Q6-SFR.bin")) {
info.file_size = table_info->size;
mhi_process_sfr(mhi_cntrl, &info);
return;
}
if (mhi_find_next_file_offset(mhi_cntrl, &info, table_info))
return;
}
}
EXPORT_SYMBOL(mhi_dump_sfr);
/* setup rddm vector table for rddm transfer and program rxvec */
void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
struct image_info *img_info)
{
struct mhi_buf *mhi_buf = img_info->mhi_buf;
struct bhi_vec_entry *bhi_vec = img_info->bhi_vec;
void __iomem *base = mhi_cntrl->bhie;
u32 sequence_id;
int i = 0;
for (i = 0; i < img_info->entries - 1; i++, mhi_buf++, bhi_vec++) {
MHI_VERB("Setting vector:%pad size:%zu\n",
&mhi_buf->dma_addr, mhi_buf->len);
bhi_vec->dma_addr = mhi_buf->dma_addr;
bhi_vec->size = mhi_buf->len;
}
MHI_LOG("BHIe programming for RDDM\n");
mhi_write_reg(mhi_cntrl, base, BHIE_RXVECADDR_HIGH_OFFS,
upper_32_bits(mhi_buf->dma_addr));
mhi_write_reg(mhi_cntrl, base, BHIE_RXVECADDR_LOW_OFFS,
lower_32_bits(mhi_buf->dma_addr));
mhi_write_reg(mhi_cntrl, base, BHIE_RXVECSIZE_OFFS, mhi_buf->len);
sequence_id = prandom_u32() & BHIE_RXVECSTATUS_SEQNUM_BMSK;
if (unlikely(!sequence_id))
sequence_id = 1;
mhi_write_reg_field(mhi_cntrl, base, BHIE_RXVECDB_OFFS,
BHIE_RXVECDB_SEQNUM_BMSK, BHIE_RXVECDB_SEQNUM_SHFT,
sequence_id);
MHI_LOG("address:%pad len:0x%lx sequence:%u\n",
&mhi_buf->dma_addr, mhi_buf->len, sequence_id);
}
/* collect rddm during kernel panic */
static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl)
{
int ret;
u32 rx_status;
enum mhi_ee ee;
const u32 delayus = 5000;
u32 retry = (mhi_cntrl->timeout_ms * 1000) / delayus;
const u32 rddm_timeout_us = 200000;
int rddm_retry = rddm_timeout_us / delayus; /* time to enter rddm */
void __iomem *base = mhi_cntrl->bhie;
MHI_LOG("Entered with pm_state:%s dev_state:%s ee:%s\n",
to_mhi_pm_state_str(mhi_cntrl->pm_state),
TO_MHI_STATE_STR(mhi_cntrl->dev_state),
TO_MHI_EXEC_STR(mhi_cntrl->ee));
/*
* This should only execute during a kernel panic; we expect all other
* cores to shut down while we're collecting the rddm buffer. After
* returning from this function, we expect the device to reset.
*
* Normally we would read/write pm_state only after grabbing pm_lock;
* since we're in a panic, skip it. There is also no guarantee this
* state change will take effect, since we're setting it without
* holding pm_lock; it is best effort.
*/
mhi_cntrl->pm_state = MHI_PM_LD_ERR_FATAL_DETECT;
/* update should take the effect immediately */
smp_wmb();
/*
* Make sure device is not already in RDDM.
* In case device asserts and a kernel panic follows, device will
* already be in RDDM. Do not trigger SYS ERR again and proceed with
* waiting for image download completion.
*/
ee = mhi_get_exec_env(mhi_cntrl);
if (ee != MHI_EE_RDDM) {
MHI_LOG("Trigger device into RDDM mode using SYSERR\n");
mhi_set_mhi_state(mhi_cntrl, MHI_STATE_SYS_ERR);
MHI_LOG("Waiting for device to enter RDDM\n");
while (rddm_retry--) {
ee = mhi_get_exec_env(mhi_cntrl);
if (ee == MHI_EE_RDDM)
break;
udelay(delayus);
}
if (rddm_retry <= 0) {
/* Hardware reset; force device to enter rddm */
MHI_LOG(
"Did not enter RDDM, do a host req. reset\n");
mhi_write_reg(mhi_cntrl, mhi_cntrl->regs,
MHI_SOC_RESET_REQ_OFFSET,
MHI_SOC_RESET_REQ);
udelay(delayus);
}
ee = mhi_get_exec_env(mhi_cntrl);
}
MHI_LOG("Waiting for image download completion, current EE:%s\n",
TO_MHI_EXEC_STR(ee));
while (retry--) {
ret = mhi_read_reg_field(mhi_cntrl, base, BHIE_RXVECSTATUS_OFFS,
BHIE_RXVECSTATUS_STATUS_BMSK,
BHIE_RXVECSTATUS_STATUS_SHFT,
&rx_status);
if (ret)
return -EIO;
if (rx_status == BHIE_RXVECSTATUS_STATUS_XFER_COMPL) {
MHI_LOG("RDDM successfully collected\n");
return 0;
}
udelay(delayus);
}
ee = mhi_get_exec_env(mhi_cntrl);
ret = mhi_read_reg(mhi_cntrl, base, BHIE_RXVECSTATUS_OFFS, &rx_status);
MHI_ERR("Did not complete RDDM transfer\n");
MHI_ERR("Current EE:%s\n", TO_MHI_EXEC_STR(ee));
MHI_ERR("RXVEC_STATUS:0x%x, ret:%d\n", rx_status, ret);
return -EIO;
}
/* download ramdump image from device */
int mhi_download_rddm_img(struct mhi_controller *mhi_cntrl, bool in_panic)
{
void __iomem *base = mhi_cntrl->bhie;
u32 rx_status;
if (in_panic)
return __mhi_download_rddm_in_panic(mhi_cntrl);
MHI_LOG("Waiting for image download completion\n");
/* waiting for image download completion */
wait_event_timeout(mhi_cntrl->state_event,
mhi_read_reg_field(mhi_cntrl, base,
BHIE_RXVECSTATUS_OFFS,
BHIE_RXVECSTATUS_STATUS_BMSK,
BHIE_RXVECSTATUS_STATUS_SHFT,
&rx_status) || rx_status,
msecs_to_jiffies(mhi_cntrl->timeout_ms));
return (rx_status == BHIE_RXVECSTATUS_STATUS_XFER_COMPL) ? 0 : -EIO;
}
EXPORT_SYMBOL(mhi_download_rddm_img);
static int mhi_fw_load_amss(struct mhi_controller *mhi_cntrl,
const struct mhi_buf *mhi_buf)
{
void __iomem *base = mhi_cntrl->bhie;
rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
u32 tx_status;
read_lock_bh(pm_lock);
if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
read_unlock_bh(pm_lock);
return -EIO;
}
MHI_LOG("Starting BHIe Programming\n");
mhi_write_reg(mhi_cntrl, base, BHIE_TXVECADDR_HIGH_OFFS,
upper_32_bits(mhi_buf->dma_addr));
mhi_write_reg(mhi_cntrl, base, BHIE_TXVECADDR_LOW_OFFS,
lower_32_bits(mhi_buf->dma_addr));
mhi_write_reg(mhi_cntrl, base, BHIE_TXVECSIZE_OFFS, mhi_buf->len);
mhi_cntrl->sequence_id = prandom_u32() & BHIE_TXVECSTATUS_SEQNUM_BMSK;
mhi_write_reg_field(mhi_cntrl, base, BHIE_TXVECDB_OFFS,
BHIE_TXVECDB_SEQNUM_BMSK, BHIE_TXVECDB_SEQNUM_SHFT,
mhi_cntrl->sequence_id);
read_unlock_bh(pm_lock);
MHI_LOG("Upper:0x%x Lower:0x%x len:0x%lx sequence:%u\n",
upper_32_bits(mhi_buf->dma_addr),
lower_32_bits(mhi_buf->dma_addr),
mhi_buf->len, mhi_cntrl->sequence_id);
MHI_LOG("Waiting for image transfer completion\n");
/* waiting for image download completion */
wait_event_timeout(mhi_cntrl->state_event,
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
mhi_read_reg_field(mhi_cntrl, base,
BHIE_TXVECSTATUS_OFFS,
BHIE_TXVECSTATUS_STATUS_BMSK,
BHIE_TXVECSTATUS_STATUS_SHFT,
&tx_status) || tx_status,
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
return -EIO;
return (tx_status == BHIE_TXVECSTATUS_STATUS_XFER_COMPL) ? 0 : -EIO;
}
static int mhi_fw_load_sbl(struct mhi_controller *mhi_cntrl,
dma_addr_t dma_addr,
size_t size)
{
u32 tx_status, val;
int i, ret;
void __iomem *base = mhi_cntrl->bhi;
rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
struct {
char *name;
u32 offset;
} error_reg[] = {
{ "ERROR_CODE", BHI_ERRCODE },
{ "ERROR_DBG1", BHI_ERRDBG1 },
{ "ERROR_DBG2", BHI_ERRDBG2 },
{ "ERROR_DBG3", BHI_ERRDBG3 },
{ NULL },
};
MHI_LOG("Starting BHI programming\n");
/* program start sbl download via bhi protocol */
read_lock_bh(pm_lock);
if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
read_unlock_bh(pm_lock);
goto invalid_pm_state;
}
mhi_write_reg(mhi_cntrl, base, BHI_STATUS, 0);
mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_HIGH,
upper_32_bits(dma_addr));
mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_LOW,
lower_32_bits(dma_addr));
mhi_write_reg(mhi_cntrl, base, BHI_IMGSIZE, size);
mhi_cntrl->session_id = prandom_u32() & BHI_TXDB_SEQNUM_BMSK;
mhi_write_reg(mhi_cntrl, base, BHI_IMGTXDB, mhi_cntrl->session_id);
read_unlock_bh(pm_lock);
MHI_LOG("Waiting for image transfer completion\n");
/* waiting for image download completion */
wait_event_timeout(mhi_cntrl->state_event,
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
mhi_read_reg_field(mhi_cntrl, base, BHI_STATUS,
BHI_STATUS_MASK, BHI_STATUS_SHIFT,
&tx_status) || tx_status,
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
goto invalid_pm_state;
if (tx_status == BHI_STATUS_ERROR) {
MHI_ERR("Image transfer failed\n");
read_lock_bh(pm_lock);
if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
for (i = 0; error_reg[i].name; i++) {
ret = mhi_read_reg(mhi_cntrl, base,
error_reg[i].offset, &val);
if (ret)
break;
MHI_ERR("reg:%s value:0x%x\n",
error_reg[i].name, val);
}
}
read_unlock_bh(pm_lock);
goto invalid_pm_state;
}
return (tx_status == BHI_STATUS_SUCCESS) ? 0 : -ETIMEDOUT;
invalid_pm_state:
return -EIO;
}
void mhi_free_bhie_table(struct mhi_controller *mhi_cntrl,
struct image_info *image_info)
{
int i;
struct mhi_buf *mhi_buf = image_info->mhi_buf;
for (i = 0; i < image_info->entries; i++, mhi_buf++)
mhi_free_contig_coherent(mhi_cntrl, mhi_buf->len, mhi_buf->buf,
mhi_buf->dma_addr);
kfree(image_info->mhi_buf);
kfree(image_info);
}
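/*
* The BHIe image table holds the firmware split into seg_len sized
* segments plus one extra entry whose buffer serves as the
* bhi_vec_entry vector table describing all the image segments (see
* mhi_firmware_copy() and mhi_fw_load_amss()).
*/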
int mhi_alloc_bhie_table(struct mhi_controller *mhi_cntrl,
struct image_info **image_info,
size_t alloc_size)
{
size_t seg_size = mhi_cntrl->seg_len;
/* require an additional entry for the vector table */
int segments = DIV_ROUND_UP(alloc_size, seg_size) + 1;
int i;
struct image_info *img_info;
struct mhi_buf *mhi_buf;
MHI_LOG("Allocating bytes:%zu seg_size:%zu total_seg:%u\n",
alloc_size, seg_size, segments);
img_info = kzalloc(sizeof(*img_info), GFP_KERNEL);
if (!img_info)
return -ENOMEM;
/* allocate memory for entries */
img_info->mhi_buf = kcalloc(segments, sizeof(*img_info->mhi_buf),
GFP_KERNEL);
if (!img_info->mhi_buf)
goto error_alloc_mhi_buf;
/* allocate and populate vector table */
mhi_buf = img_info->mhi_buf;
for (i = 0; i < segments; i++, mhi_buf++) {
size_t vec_size = seg_size;
/* last entry is for vector table */
if (i == segments - 1)
vec_size = sizeof(struct bhi_vec_entry) * i;
mhi_buf->len = vec_size;
mhi_buf->buf = mhi_alloc_contig_coherent(mhi_cntrl, vec_size,
&mhi_buf->dma_addr, GFP_KERNEL);
if (!mhi_buf->buf)
goto error_alloc_segment;
MHI_LOG("Entry:%d Address:0x%llx size:%lu\n", i,
mhi_buf->dma_addr, mhi_buf->len);
}
img_info->bhi_vec = img_info->mhi_buf[segments - 1].buf;
img_info->entries = segments;
*image_info = img_info;
MHI_LOG("Successfully allocated bhi vec table\n");
return 0;
error_alloc_segment:
for (--i, --mhi_buf; i >= 0; i--, mhi_buf--)
mhi_free_contig_coherent(mhi_cntrl, mhi_buf->len, mhi_buf->buf,
mhi_buf->dma_addr);
error_alloc_mhi_buf:
kfree(img_info);
return -ENOMEM;
}
static void mhi_firmware_copy(struct mhi_controller *mhi_cntrl,
const struct firmware *firmware,
struct image_info *img_info)
{
size_t remainder = firmware->size;
size_t to_cpy;
const u8 *buf = firmware->data;
int i = 0;
struct mhi_buf *mhi_buf = img_info->mhi_buf;
struct bhi_vec_entry *bhi_vec = img_info->bhi_vec;
while (remainder) {
MHI_ASSERT(i >= img_info->entries, "malformed vector table");
to_cpy = min(remainder, mhi_buf->len);
memcpy(mhi_buf->buf, buf, to_cpy);
bhi_vec->dma_addr = mhi_buf->dma_addr;
bhi_vec->size = to_cpy;
MHI_VERB("Setting Vector:0x%llx size: %llu\n",
bhi_vec->dma_addr, bhi_vec->size);
buf += to_cpy;
remainder -= to_cpy;
i++;
bhi_vec++;
mhi_buf++;
}
}
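/*
* Firmware download flow: wait for the device to reach PBL, push the
* SBL image over BHI, then, for full boot chain (fbc) download, stage
* the complete image in a BHIe vector table and start the AMSS
* transfer once the device reports the SBL execution environment.
*/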
void mhi_fw_load_worker(struct work_struct *work)
{
int ret;
struct mhi_controller *mhi_cntrl;
const char *fw_name;
const struct firmware *firmware = NULL;
struct image_info *image_info;
void *buf;
dma_addr_t dma_addr;
size_t size;
mhi_cntrl = container_of(work, struct mhi_controller, fw_worker);
MHI_LOG("Waiting for device to enter PBL from EE:%s\n",
TO_MHI_EXEC_STR(mhi_cntrl->ee));
ret = wait_event_timeout(mhi_cntrl->state_event,
MHI_IN_PBL(mhi_cntrl->ee) ||
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
MHI_ERR("MHI is not in valid state\n");
return;
}
MHI_LOG("Device current EE:%s\n", TO_MHI_EXEC_STR(mhi_cntrl->ee));
/* if device in pthru, do reset to ready state transition */
if (mhi_cntrl->ee == MHI_EE_PTHRU)
goto fw_load_ee_pthru;
fw_name = (mhi_cntrl->ee == MHI_EE_EDL) ?
mhi_cntrl->edl_image : mhi_cntrl->fw_image;
if (!fw_name || (mhi_cntrl->fbc_download && (!mhi_cntrl->sbl_size ||
!mhi_cntrl->seg_len))) {
MHI_ERR("No firmware image defined or !sbl_size || !seg_len\n");
return;
}
ret = request_firmware(&firmware, fw_name, mhi_cntrl->dev);
if (ret) {
MHI_ERR("Error loading firmware, ret:%d\n", ret);
return;
}
size = (mhi_cntrl->fbc_download) ? mhi_cntrl->sbl_size : firmware->size;
/* the sbl size provided is maximum size, not necessarily image size */
if (size > firmware->size)
size = firmware->size;
buf = mhi_alloc_coherent(mhi_cntrl, size, &dma_addr, GFP_KERNEL);
if (!buf) {
MHI_ERR("Could not allocate memory for image\n");
release_firmware(firmware);
return;
}
/* load sbl image */
memcpy(buf, firmware->data, size);
ret = mhi_fw_load_sbl(mhi_cntrl, dma_addr, size);
mhi_free_coherent(mhi_cntrl, size, buf, dma_addr);
if (!mhi_cntrl->fbc_download || ret || mhi_cntrl->ee == MHI_EE_EDL)
release_firmware(firmware);
/* error or in edl, we're done */
if (ret || mhi_cntrl->ee == MHI_EE_EDL)
return;
write_lock_irq(&mhi_cntrl->pm_lock);
mhi_cntrl->dev_state = MHI_STATE_RESET;
write_unlock_irq(&mhi_cntrl->pm_lock);
/*
* if we're doing fbc, populate vector tables while
* device transitioning into MHI READY state
*/
if (mhi_cntrl->fbc_download) {
ret = mhi_alloc_bhie_table(mhi_cntrl, &mhi_cntrl->fbc_image,
firmware->size);
if (ret) {
MHI_ERR("Error alloc size of %zu\n", firmware->size);
goto error_alloc_fw_table;
}
MHI_LOG("Copying firmware image into vector table\n");
/* load the firmware into BHIE vec table */
mhi_firmware_copy(mhi_cntrl, firmware, mhi_cntrl->fbc_image);
}
fw_load_ee_pthru:
/* transitioning into MHI RESET->READY state */
ret = mhi_ready_state_transition(mhi_cntrl);
MHI_LOG("To Reset->Ready PM_STATE:%s MHI_STATE:%s EE:%s, ret:%d\n",
to_mhi_pm_state_str(mhi_cntrl->pm_state),
TO_MHI_STATE_STR(mhi_cntrl->dev_state),
TO_MHI_EXEC_STR(mhi_cntrl->ee), ret);
if (!mhi_cntrl->fbc_download)
return;
if (ret) {
MHI_ERR("Did not transition to READY state\n");
goto error_read;
}
/* wait for SBL event */
ret = wait_event_timeout(mhi_cntrl->state_event,
mhi_cntrl->ee == MHI_EE_SBL ||
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
MHI_ERR("MHI did not enter BHIE\n");
goto error_read;
}
/* start full firmware image download */
image_info = mhi_cntrl->fbc_image;
ret = mhi_fw_load_amss(mhi_cntrl,
/* last entry is vec table */
&image_info->mhi_buf[image_info->entries - 1]);
MHI_LOG("amss fw_load, ret:%d\n", ret);
release_firmware(firmware);
return;
error_read:
mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image);
mhi_cntrl->fbc_image = NULL;
error_alloc_fw_table:
release_firmware(firmware);
}


@@ -0,0 +1,227 @@
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.*/
#include <linux/debugfs.h>
#include <linux/device.h>
#include <linux/dma-direction.h>
#include <linux/dma-mapping.h>
#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/of.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/termios.h>
#include <linux/wait.h>
#include <linux/mhi.h>
#include "mhi_internal.h"
struct dtr_ctrl_msg {
u32 preamble;
u32 msg_id;
u32 dest_id;
u32 size;
u32 msg;
} __packed;
#define CTRL_MAGIC (0x4C525443)
#define CTRL_MSG_DTR BIT(0)
#define CTRL_MSG_RTS BIT(1)
#define CTRL_MSG_DCD BIT(0)
#define CTRL_MSG_DSR BIT(1)
#define CTRL_MSG_RI BIT(3)
#define CTRL_HOST_STATE (0x10)
#define CTRL_DEVICE_STATE (0x11)
#define CTRL_GET_CHID(dtr) ((dtr)->dest_id & 0xFF)
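/*
* For host-originated messages, dest_id carries the MHI channel id the
* signal change applies to and msg carries the CTRL_MSG_* bits; for
* device-originated messages the channel id is recovered with
* CTRL_GET_CHID() in the downlink callback below.
*/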
static int mhi_dtr_tiocmset(struct mhi_controller *mhi_cntrl,
struct mhi_device *mhi_dev,
u32 tiocm)
{
struct dtr_ctrl_msg *dtr_msg = NULL;
struct mhi_chan *dtr_chan = mhi_cntrl->dtr_dev->ul_chan;
spinlock_t *res_lock = &mhi_dev->dev.devres_lock;
u32 cur_tiocm;
int ret = 0;
cur_tiocm = mhi_dev->tiocm & ~(TIOCM_CD | TIOCM_DSR | TIOCM_RI);
tiocm &= (TIOCM_DTR | TIOCM_RTS);
/* state did not change */
if (cur_tiocm == tiocm)
return 0;
mutex_lock(&dtr_chan->mutex);
dtr_msg = kzalloc(sizeof(*dtr_msg), GFP_KERNEL);
if (!dtr_msg) {
ret = -ENOMEM;
goto tiocm_exit;
}
dtr_msg->preamble = CTRL_MAGIC;
dtr_msg->msg_id = CTRL_HOST_STATE;
dtr_msg->dest_id = mhi_dev->ul_chan_id;
dtr_msg->size = sizeof(u32);
if (tiocm & TIOCM_DTR)
dtr_msg->msg |= CTRL_MSG_DTR;
if (tiocm & TIOCM_RTS)
dtr_msg->msg |= CTRL_MSG_RTS;
reinit_completion(&dtr_chan->completion);
ret = mhi_queue_transfer(mhi_cntrl->dtr_dev, DMA_TO_DEVICE, dtr_msg,
sizeof(*dtr_msg), MHI_EOT);
if (ret)
goto tiocm_exit;
ret = wait_for_completion_timeout(&dtr_chan->completion,
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (!ret) {
MHI_ERR("Failed to receive transfer callback\n");
ret = -EIO;
goto tiocm_exit;
}
ret = 0;
spin_lock_irq(res_lock);
mhi_dev->tiocm &= ~(TIOCM_DTR | TIOCM_RTS);
mhi_dev->tiocm |= tiocm;
spin_unlock_irq(res_lock);
tiocm_exit:
kfree(dtr_msg);
mutex_unlock(&dtr_chan->mutex);
return ret;
}
long mhi_ioctl(struct mhi_device *mhi_dev, unsigned int cmd, unsigned long arg)
{
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
int ret;
/* ioctl not supported by this controller */
if (!mhi_cntrl->dtr_dev)
return -EIO;
switch (cmd) {
case TIOCMGET:
return mhi_dev->tiocm;
case TIOCMSET:
{
u32 tiocm;
ret = get_user(tiocm, (u32 __user *)arg);
if (ret)
return ret;
return mhi_dtr_tiocmset(mhi_cntrl, mhi_dev, tiocm);
}
default:
break;
}
return -ENOIOCTLCMD;
}
EXPORT_SYMBOL(mhi_ioctl);
static void mhi_dtr_dl_xfer_cb(struct mhi_device *mhi_dev,
struct mhi_result *mhi_result)
{
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
struct dtr_ctrl_msg *dtr_msg = mhi_result->buf_addr;
u32 chan;
spinlock_t *res_lock;
if (mhi_result->bytes_xferd != sizeof(*dtr_msg)) {
MHI_ERR("Unexpected length %zu received\n",
mhi_result->bytes_xferd);
return;
}
MHI_VERB("preamble:0x%x msg_id:%u dest_id:%u msg:0x%x\n",
dtr_msg->preamble, dtr_msg->msg_id, dtr_msg->dest_id,
dtr_msg->msg);
chan = CTRL_GET_CHID(dtr_msg);
if (chan >= mhi_cntrl->max_chan)
return;
mhi_dev = mhi_cntrl->mhi_chan[chan].mhi_dev;
if (!mhi_dev)
return;
res_lock = &mhi_dev->dev.devres_lock;
spin_lock_irq(res_lock);
mhi_dev->tiocm &= ~(TIOCM_CD | TIOCM_DSR | TIOCM_RI);
if (dtr_msg->msg & CTRL_MSG_DCD)
mhi_dev->tiocm |= TIOCM_CD;
if (dtr_msg->msg & CTRL_MSG_DSR)
mhi_dev->tiocm |= TIOCM_DSR;
if (dtr_msg->msg & CTRL_MSG_RI)
mhi_dev->tiocm |= TIOCM_RI;
spin_unlock_irq(res_lock);
/* Notify the update */
mhi_notify(mhi_dev, MHI_CB_DTR_SIGNAL);
}
static void mhi_dtr_ul_xfer_cb(struct mhi_device *mhi_dev,
struct mhi_result *mhi_result)
{
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
struct mhi_chan *dtr_chan = mhi_cntrl->dtr_dev->ul_chan;
MHI_VERB("Received with status:%d\n", mhi_result->transaction_status);
if (!mhi_result->transaction_status)
complete(&dtr_chan->completion);
}
static void mhi_dtr_remove(struct mhi_device *mhi_dev)
{
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
mhi_cntrl->dtr_dev = NULL;
}
static int mhi_dtr_probe(struct mhi_device *mhi_dev,
const struct mhi_device_id *id)
{
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
int ret;
MHI_LOG("Enter for DTR control channel\n");
ret = mhi_prepare_for_transfer(mhi_dev);
if (!ret)
mhi_cntrl->dtr_dev = mhi_dev;
MHI_LOG("Exit with ret:%d\n", ret);
return ret;
}
static const struct mhi_device_id mhi_dtr_table[] = {
{ .chan = "IP_CTRL" },
{},
};
static struct mhi_driver mhi_dtr_driver = {
.id_table = mhi_dtr_table,
.remove = mhi_dtr_remove,
.probe = mhi_dtr_probe,
.ul_xfer_cb = mhi_dtr_ul_xfer_cb,
.dl_xfer_cb = mhi_dtr_dl_xfer_cb,
.driver = {
.name = "MHI_DTR",
.owner = THIS_MODULE,
}
};
int __init mhi_dtr_init(void)
{
return mhi_driver_register(&mhi_dtr_driver);
}

File diff suppressed because it is too large


@@ -0,0 +1,940 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (c) 2018-2019, The Linux Foundation. All rights reserved. */
#ifndef _MHI_INT_H
#define _MHI_INT_H
#include <linux/msm_rtb.h>
extern struct bus_type mhi_bus_type;
/* MHI mmio register mapping */
#define PCI_INVALID_READ(val) (val == U32_MAX)
#define MHI_REG_SIZE (SZ_4K)
#define MHIREGLEN (0x0)
#define MHIREGLEN_MHIREGLEN_MASK (0xFFFFFFFF)
#define MHIREGLEN_MHIREGLEN_SHIFT (0)
#define MHIVER (0x8)
#define MHIVER_MHIVER_MASK (0xFFFFFFFF)
#define MHIVER_MHIVER_SHIFT (0)
#define MHICFG (0x10)
#define MHICFG_NHWER_MASK (0xFF000000)
#define MHICFG_NHWER_SHIFT (24)
#define MHICFG_NER_MASK (0xFF0000)
#define MHICFG_NER_SHIFT (16)
#define MHICFG_NHWCH_MASK (0xFF00)
#define MHICFG_NHWCH_SHIFT (8)
#define MHICFG_NCH_MASK (0xFF)
#define MHICFG_NCH_SHIFT (0)
#define CHDBOFF (0x18)
#define CHDBOFF_CHDBOFF_MASK (0xFFFFFFFF)
#define CHDBOFF_CHDBOFF_SHIFT (0)
#define ERDBOFF (0x20)
#define ERDBOFF_ERDBOFF_MASK (0xFFFFFFFF)
#define ERDBOFF_ERDBOFF_SHIFT (0)
#define BHIOFF (0x28)
#define BHIOFF_BHIOFF_MASK (0xFFFFFFFF)
#define BHIOFF_BHIOFF_SHIFT (0)
#define BHIEOFF (0x2C)
#define BHIEOFF_BHIEOFF_MASK (0xFFFFFFFF)
#define BHIEOFF_BHIEOFF_SHIFT (0)
#define DEBUGOFF (0x30)
#define DEBUGOFF_DEBUGOFF_MASK (0xFFFFFFFF)
#define DEBUGOFF_DEBUGOFF_SHIFT (0)
#define MHICTRL (0x38)
#define MHICTRL_MHISTATE_MASK (0x0000FF00)
#define MHICTRL_MHISTATE_SHIFT (8)
#define MHICTRL_RESET_MASK (0x2)
#define MHICTRL_RESET_SHIFT (1)
#define MHISTATUS (0x48)
#define MHISTATUS_MHISTATE_MASK (0x0000FF00)
#define MHISTATUS_MHISTATE_SHIFT (8)
#define MHISTATUS_SYSERR_MASK (0x4)
#define MHISTATUS_SYSERR_SHIFT (2)
#define MHISTATUS_READY_MASK (0x1)
#define MHISTATUS_READY_SHIFT (0)
#define CCABAP_LOWER (0x58)
#define CCABAP_LOWER_CCABAP_LOWER_MASK (0xFFFFFFFF)
#define CCABAP_LOWER_CCABAP_LOWER_SHIFT (0)
#define CCABAP_HIGHER (0x5C)
#define CCABAP_HIGHER_CCABAP_HIGHER_MASK (0xFFFFFFFF)
#define CCABAP_HIGHER_CCABAP_HIGHER_SHIFT (0)
#define ECABAP_LOWER (0x60)
#define ECABAP_LOWER_ECABAP_LOWER_MASK (0xFFFFFFFF)
#define ECABAP_LOWER_ECABAP_LOWER_SHIFT (0)
#define ECABAP_HIGHER (0x64)
#define ECABAP_HIGHER_ECABAP_HIGHER_MASK (0xFFFFFFFF)
#define ECABAP_HIGHER_ECABAP_HIGHER_SHIFT (0)
#define CRCBAP_LOWER (0x68)
#define CRCBAP_LOWER_CRCBAP_LOWER_MASK (0xFFFFFFFF)
#define CRCBAP_LOWER_CRCBAP_LOWER_SHIFT (0)
#define CRCBAP_HIGHER (0x6C)
#define CRCBAP_HIGHER_CRCBAP_HIGHER_MASK (0xFFFFFFFF)
#define CRCBAP_HIGHER_CRCBAP_HIGHER_SHIFT (0)
#define CRDB_LOWER (0x70)
#define CRDB_LOWER_CRDB_LOWER_MASK (0xFFFFFFFF)
#define CRDB_LOWER_CRDB_LOWER_SHIFT (0)
#define CRDB_HIGHER (0x74)
#define CRDB_HIGHER_CRDB_HIGHER_MASK (0xFFFFFFFF)
#define CRDB_HIGHER_CRDB_HIGHER_SHIFT (0)
#define MHICTRLBASE_LOWER (0x80)
#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_MASK (0xFFFFFFFF)
#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_SHIFT (0)
#define MHICTRLBASE_HIGHER (0x84)
#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_MASK (0xFFFFFFFF)
#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_SHIFT (0)
#define MHICTRLLIMIT_LOWER (0x88)
#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_MASK (0xFFFFFFFF)
#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_SHIFT (0)
#define MHICTRLLIMIT_HIGHER (0x8C)
#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_MASK (0xFFFFFFFF)
#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_SHIFT (0)
#define MHIDATABASE_LOWER (0x98)
#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_MASK (0xFFFFFFFF)
#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_SHIFT (0)
#define MHIDATABASE_HIGHER (0x9C)
#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_MASK (0xFFFFFFFF)
#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_SHIFT (0)
#define MHIDATALIMIT_LOWER (0xA0)
#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_MASK (0xFFFFFFFF)
#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_SHIFT (0)
#define MHIDATALIMIT_HIGHER (0xA4)
#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_MASK (0xFFFFFFFF)
#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_SHIFT (0)
/* Host request register */
#define MHI_SOC_RESET_REQ_OFFSET (0xB0)
#define MHI_SOC_RESET_REQ BIT(0)
/* MHI misc capability registers */
#define MISC_OFFSET (0x24)
#define MISC_CAP_MASK (0xFFFFFFFF)
#define MISC_CAP_SHIFT (0)
#define CAP_CAPID_MASK (0xFF000000)
#define CAP_CAPID_SHIFT (24)
#define CAP_NEXT_CAP_MASK (0x00FFF000)
#define CAP_NEXT_CAP_SHIFT (12)
/* MHI Timesync offsets */
#define TIMESYNC_CFG_OFFSET (0x00)
#define TIMESYNC_CFG_CAPID_MASK (CAP_CAPID_MASK)
#define TIMESYNC_CFG_CAPID_SHIFT (CAP_CAPID_SHIFT)
#define TIMESYNC_CFG_NEXT_OFF_MASK (CAP_NEXT_CAP_MASK)
#define TIMESYNC_CFG_NEXT_OFF_SHIFT (CAP_NEXT_CAP_SHIFT)
#define TIMESYNC_CFG_NUMCMD_MASK (0xFF)
#define TIMESYNC_CFG_NUMCMD_SHIFT (0)
#define TIMESYNC_DB_OFFSET (0x4)
#define TIMESYNC_TIME_LOW_OFFSET (0x8)
#define TIMESYNC_TIME_HIGH_OFFSET (0xC)
#define TIMESYNC_CAP_ID (2)
/* MHI Bandwidth scaling offsets */
#define BW_SCALE_CFG_OFFSET (0x04)
#define BW_SCALE_CFG_CHAN_DB_ID_MASK (0xFE000000)
#define BW_SCALE_CFG_CHAN_DB_ID_SHIFT (25)
#define BW_SCALE_CFG_ENABLED_MASK (0x01000000)
#define BW_SCALE_CFG_ENABLED_SHIFT (24)
#define BW_SCALE_CFG_ER_ID_MASK (0x00F80000)
#define BW_SCALE_CFG_ER_ID_SHIFT (19)
#define BW_SCALE_CAP_ID (3)
/* MHI BHI offsets */
#define BHI_BHIVERSION_MINOR (0x00)
#define BHI_BHIVERSION_MAJOR (0x04)
#define BHI_IMGADDR_LOW (0x08)
#define BHI_IMGADDR_HIGH (0x0C)
#define BHI_IMGSIZE (0x10)
#define BHI_RSVD1 (0x14)
#define BHI_IMGTXDB (0x18)
#define BHI_TXDB_SEQNUM_BMSK (0x3FFFFFFF)
#define BHI_TXDB_SEQNUM_SHFT (0)
#define BHI_RSVD2 (0x1C)
#define BHI_INTVEC (0x20)
#define BHI_RSVD3 (0x24)
#define BHI_EXECENV (0x28)
#define BHI_STATUS (0x2C)
#define BHI_ERRCODE (0x30)
#define BHI_ERRDBG1 (0x34)
#define BHI_ERRDBG2 (0x38)
#define BHI_ERRDBG3 (0x3C)
#define BHI_SERIALNU (0x40)
#define BHI_SBLANTIROLLVER (0x44)
#define BHI_NUMSEG (0x48)
#define BHI_MSMHWID(n) (0x4C + (0x4 * (n)))
#define BHI_OEMPKHASH(n) (0x64 + (0x4 * (n)))
#define BHI_RSVD5 (0xC4)
#define BHI_STATUS_MASK (0xC0000000)
#define BHI_STATUS_SHIFT (30)
#define BHI_STATUS_ERROR (3)
#define BHI_STATUS_SUCCESS (2)
#define BHI_STATUS_RESET (0)
/* MHI BHIE offsets */
#define BHIE_MSMSOCID_OFFS (0x0000)
#define BHIE_TXVECADDR_LOW_OFFS (0x002C)
#define BHIE_TXVECADDR_HIGH_OFFS (0x0030)
#define BHIE_TXVECSIZE_OFFS (0x0034)
#define BHIE_TXVECDB_OFFS (0x003C)
#define BHIE_TXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
#define BHIE_TXVECDB_SEQNUM_SHFT (0)
#define BHIE_TXVECSTATUS_OFFS (0x0044)
#define BHIE_TXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
#define BHIE_TXVECSTATUS_SEQNUM_SHFT (0)
#define BHIE_TXVECSTATUS_STATUS_BMSK (0xC0000000)
#define BHIE_TXVECSTATUS_STATUS_SHFT (30)
#define BHIE_TXVECSTATUS_STATUS_RESET (0x00)
#define BHIE_TXVECSTATUS_STATUS_XFER_COMPL (0x02)
#define BHIE_TXVECSTATUS_STATUS_ERROR (0x03)
#define BHIE_RXVECADDR_LOW_OFFS (0x0060)
#define BHIE_RXVECADDR_HIGH_OFFS (0x0064)
#define BHIE_RXVECSIZE_OFFS (0x0068)
#define BHIE_RXVECDB_OFFS (0x0070)
#define BHIE_RXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
#define BHIE_RXVECDB_SEQNUM_SHFT (0)
#define BHIE_RXVECSTATUS_OFFS (0x0078)
#define BHIE_RXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
#define BHIE_RXVECSTATUS_SEQNUM_SHFT (0)
#define BHIE_RXVECSTATUS_STATUS_BMSK (0xC0000000)
#define BHIE_RXVECSTATUS_STATUS_SHFT (30)
#define BHIE_RXVECSTATUS_STATUS_RESET (0x00)
#define BHIE_RXVECSTATUS_STATUS_XFER_COMPL (0x02)
#define BHIE_RXVECSTATUS_STATUS_ERROR (0x03)
#define SOC_HW_VERSION_OFFS (0x224)
#define SOC_HW_VERSION_FAM_NUM_BMSK (0xF0000000)
#define SOC_HW_VERSION_FAM_NUM_SHFT (28)
#define SOC_HW_VERSION_DEV_NUM_BMSK (0x0FFF0000)
#define SOC_HW_VERSION_DEV_NUM_SHFT (16)
#define SOC_HW_VERSION_MAJOR_VER_BMSK (0x0000FF00)
#define SOC_HW_VERSION_MAJOR_VER_SHFT (8)
#define SOC_HW_VERSION_MINOR_VER_BMSK (0x000000FF)
#define SOC_HW_VERSION_MINOR_VER_SHFT (0)
/* timesync time calculations */
#define LOCAL_TICKS_TO_US(x) (div_u64((x) * 100ULL, \
div_u64(mhi_cntrl->local_timer_freq, 10000ULL)))
#define REMOTE_TICKS_TO_US(x) (div_u64((x) * 100ULL, \
div_u64(mhi_cntrl->remote_timer_freq, 10000ULL)))
struct mhi_event_ctxt {
u32 reserved : 8;
u32 intmodc : 8;
u32 intmodt : 16;
u32 ertype;
u32 msivec;
u64 rbase __packed __aligned(4);
u64 rlen __packed __aligned(4);
u64 rp __packed __aligned(4);
u64 wp __packed __aligned(4);
};
struct mhi_chan_ctxt {
u32 chstate : 8;
u32 brstmode : 2;
u32 pollcfg : 6;
u32 reserved : 16;
u32 chtype;
u32 erindex;
u64 rbase __packed __aligned(4);
u64 rlen __packed __aligned(4);
u64 rp __packed __aligned(4);
u64 wp __packed __aligned(4);
};
struct mhi_cmd_ctxt {
u32 reserved0;
u32 reserved1;
u32 reserved2;
u64 rbase __packed __aligned(4);
u64 rlen __packed __aligned(4);
u64 rp __packed __aligned(4);
u64 wp __packed __aligned(4);
};
struct mhi_tre {
u64 ptr;
u32 dword[2];
};
struct bhi_vec_entry {
u64 dma_addr;
u64 size;
};
enum mhi_cmd_type {
MHI_CMD_TYPE_NOP = 1,
MHI_CMD_TYPE_RESET = 16,
MHI_CMD_TYPE_STOP = 17,
MHI_CMD_TYPE_START = 18,
MHI_CMD_TYPE_TSYNC = 24,
};
/* no operation command */
#define MHI_TRE_CMD_NOOP_PTR (0)
#define MHI_TRE_CMD_NOOP_DWORD0 (0)
#define MHI_TRE_CMD_NOOP_DWORD1 (MHI_CMD_TYPE_NOP << 16)
/* channel reset command */
#define MHI_TRE_CMD_RESET_PTR (0)
#define MHI_TRE_CMD_RESET_DWORD0 (0)
#define MHI_TRE_CMD_RESET_DWORD1(chid) ((chid << 24) | \
(MHI_CMD_TYPE_RESET << 16))
/* channel stop command */
#define MHI_TRE_CMD_STOP_PTR (0)
#define MHI_TRE_CMD_STOP_DWORD0 (0)
#define MHI_TRE_CMD_STOP_DWORD1(chid) ((chid << 24) | (MHI_CMD_TYPE_STOP << 16))
/* channel start command */
#define MHI_TRE_CMD_START_PTR (0)
#define MHI_TRE_CMD_START_DWORD0 (0)
#define MHI_TRE_CMD_START_DWORD1(chid) ((chid << 24) | \
(MHI_CMD_TYPE_START << 16))
/* time sync cfg command */
#define MHI_TRE_CMD_TSYNC_CFG_PTR (0)
#define MHI_TRE_CMD_TSYNC_CFG_DWORD0 (0)
#define MHI_TRE_CMD_TSYNC_CFG_DWORD1(er) ((MHI_CMD_TYPE_TSYNC << 16) | \
(er << 24))
#define MHI_TRE_GET_CMD_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
#define MHI_TRE_GET_CMD_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
/* event descriptor macros */
#define MHI_TRE_EV_PTR(ptr) (ptr)
#define MHI_TRE_EV_DWORD0(code, len) ((code << 24) | len)
#define MHI_TRE_EV_DWORD1(chid, type) ((chid << 24) | (type << 16))
#define MHI_TRE_GET_EV_PTR(tre) ((tre)->ptr)
#define MHI_TRE_GET_EV_CODE(tre) (((tre)->dword[0] >> 24) & 0xFF)
#define MHI_TRE_GET_EV_LEN(tre) ((tre)->dword[0] & 0xFFFF)
#define MHI_TRE_GET_EV_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
#define MHI_TRE_GET_EV_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
#define MHI_TRE_GET_EV_STATE(tre) (((tre)->dword[0] >> 24) & 0xFF)
#define MHI_TRE_GET_EV_EXECENV(tre) (((tre)->dword[0] >> 24) & 0xFF)
#define MHI_TRE_GET_EV_TSYNC_SEQ(tre) ((tre)->dword[0])
#define MHI_TRE_GET_EV_TIME(tre) ((tre)->ptr)
#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits((tre)->ptr)
#define MHI_TRE_GET_EV_VEID(tre) (((tre)->dword[0] >> 16) & 0xFF)
#define MHI_TRE_GET_EV_LINKSPEED(tre) (((tre)->dword[1] >> 24) & 0xFF)
#define MHI_TRE_GET_EV_LINKWIDTH(tre) ((tre)->dword[0] & 0xFF)
#define MHI_TRE_GET_EV_BW_REQ_SEQ(tre) (((tre)->dword[0] >> 8) & 0xFF)
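/*
* Illustrative decode (hypothetical values): for a transfer completion
* event on channel 5 that moved 512 bytes with an EOT condition, the
* macros above yield:
* MHI_TRE_GET_EV_TYPE(tre) == MHI_PKT_TYPE_TX_EVENT
* MHI_TRE_GET_EV_CHID(tre) == 5
* MHI_TRE_GET_EV_CODE(tre) == MHI_EV_CC_EOT
* MHI_TRE_GET_EV_LEN(tre) == 512
*/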
/* transfer descriptor macros */
#define MHI_TRE_DATA_PTR(ptr) (ptr)
#define MHI_TRE_DATA_DWORD0(len) (len & MHI_MAX_MTU)
#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
| (ieot << 9) | (ieob << 8) | chain)
/* rsc transfer descriptor macros */
#define MHI_RSCTRE_DATA_PTR(ptr, len) (((u64)len << 48) | ptr)
#define MHI_RSCTRE_DATA_DWORD0(cookie) (cookie)
#define MHI_RSCTRE_DATA_DWORD1 (MHI_PKT_TYPE_COALESCING << 16)
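/*
* Illustrative packing (hypothetical values): an RSC buffer of 0x1000
* bytes at DMA address 0x80000000 is described as
* MHI_RSCTRE_DATA_PTR(0x80000000ULL, 0x1000) == 0x1000000080000000ULL,
* i.e. the buffer length is carried in the top 16 bits of the pointer
* field, with the cookie in dword[0].
*/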
enum MHI_CMD {
MHI_CMD_RESET_CHAN,
MHI_CMD_START_CHAN,
MHI_CMD_STOP_CHAN,
MHI_CMD_TIMSYNC_CFG,
};
enum MHI_PKT_TYPE {
MHI_PKT_TYPE_INVALID = 0x0,
MHI_PKT_TYPE_NOOP_CMD = 0x1,
MHI_PKT_TYPE_TRANSFER = 0x2,
MHI_PKT_TYPE_COALESCING = 0x8,
MHI_PKT_TYPE_RESET_CHAN_CMD = 0x10,
MHI_PKT_TYPE_STOP_CHAN_CMD = 0x11,
MHI_PKT_TYPE_START_CHAN_CMD = 0x12,
MHI_PKT_TYPE_STATE_CHANGE_EVENT = 0x20,
MHI_PKT_TYPE_CMD_COMPLETION_EVENT = 0x21,
MHI_PKT_TYPE_TX_EVENT = 0x22,
MHI_PKT_TYPE_RSC_TX_EVENT = 0x28,
MHI_PKT_TYPE_EE_EVENT = 0x40,
MHI_PKT_TYPE_TSYNC_EVENT = 0x48,
MHI_PKT_TYPE_BW_REQ_EVENT = 0x50,
MHI_PKT_TYPE_STALE_EVENT, /* internal event */
};
/* MHI transfer completion events */
enum MHI_EV_CCS {
MHI_EV_CC_INVALID = 0x0,
MHI_EV_CC_SUCCESS = 0x1,
MHI_EV_CC_EOT = 0x2,
MHI_EV_CC_OVERFLOW = 0x3,
MHI_EV_CC_EOB = 0x4,
MHI_EV_CC_OOB = 0x5,
MHI_EV_CC_DB_MODE = 0x6,
MHI_EV_CC_UNDEFINED_ERR = 0x10,
MHI_EV_CC_BAD_TRE = 0x11,
};
enum MHI_CH_STATE {
MHI_CH_STATE_DISABLED = 0x0,
MHI_CH_STATE_ENABLED = 0x1,
MHI_CH_STATE_RUNNING = 0x2,
MHI_CH_STATE_SUSPENDED = 0x3,
MHI_CH_STATE_STOP = 0x4,
MHI_CH_STATE_ERROR = 0x5,
};
enum MHI_BRSTMODE {
MHI_BRSTMODE_DISABLE = 0x2,
MHI_BRSTMODE_ENABLE = 0x3,
};
#define MHI_INVALID_BRSTMODE(mode) (mode != MHI_BRSTMODE_DISABLE && \
mode != MHI_BRSTMODE_ENABLE)
extern const char * const mhi_ee_str[MHI_EE_MAX];
#define TO_MHI_EXEC_STR(ee) (((ee) >= MHI_EE_MAX) ? \
"INVALID_EE" : mhi_ee_str[ee])
#define MHI_IN_PBL(ee) (ee == MHI_EE_PBL || ee == MHI_EE_PTHRU || \
ee == MHI_EE_EDL)
#define MHI_IN_MISSION_MODE(ee) (ee == MHI_EE_AMSS || ee == MHI_EE_WFW)
enum MHI_ST_TRANSITION {
MHI_ST_TRANSITION_PBL,
MHI_ST_TRANSITION_READY,
MHI_ST_TRANSITION_SBL,
MHI_ST_TRANSITION_MISSION_MODE,
MHI_ST_TRANSITION_DISABLE,
MHI_ST_TRANSITION_MAX,
};
extern const char * const mhi_state_tran_str[MHI_ST_TRANSITION_MAX];
#define TO_MHI_STATE_TRANS_STR(state) (((state) >= MHI_ST_TRANSITION_MAX) ? \
"INVALID_STATE" : mhi_state_tran_str[state])
extern const char * const mhi_state_str[MHI_STATE_MAX];
#define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
!mhi_state_str[state]) ? \
"INVALID_STATE" : mhi_state_str[state])
extern const char * const mhi_log_level_str[MHI_MSG_LVL_MAX];
#define TO_MHI_LOG_LEVEL_STR(level) ((level >= MHI_MSG_LVL_MAX || \
!mhi_log_level_str[level]) ? \
"Mask all" : mhi_log_level_str[level])
enum {
MHI_PM_BIT_DISABLE,
MHI_PM_BIT_POR,
MHI_PM_BIT_M0,
MHI_PM_BIT_M2,
MHI_PM_BIT_M3_ENTER,
MHI_PM_BIT_M3,
MHI_PM_BIT_M3_EXIT,
MHI_PM_BIT_FW_DL_ERR,
MHI_PM_BIT_DEVICE_ERR_DETECT,
MHI_PM_BIT_SYS_ERR_DETECT,
MHI_PM_BIT_SYS_ERR_PROCESS,
MHI_PM_BIT_SHUTDOWN_PROCESS,
MHI_PM_BIT_LD_ERR_FATAL_DETECT,
MHI_PM_BIT_SHUTDOWN_NO_ACCESS,
MHI_PM_BIT_MAX
};
/* internal power states */
enum MHI_PM_STATE {
MHI_PM_DISABLE = BIT(MHI_PM_BIT_DISABLE), /* MHI is not enabled */
MHI_PM_POR = BIT(MHI_PM_BIT_POR), /* reset state */
MHI_PM_M0 = BIT(MHI_PM_BIT_M0),
MHI_PM_M2 = BIT(MHI_PM_BIT_M2),
MHI_PM_M3_ENTER = BIT(MHI_PM_BIT_M3_ENTER),
MHI_PM_M3 = BIT(MHI_PM_BIT_M3),
MHI_PM_M3_EXIT = BIT(MHI_PM_BIT_M3_EXIT),
/* firmware download failure state */
MHI_PM_FW_DL_ERR = BIT(MHI_PM_BIT_FW_DL_ERR),
/* error or shutdown detected or processing state */
MHI_PM_DEVICE_ERR_DETECT = BIT(MHI_PM_BIT_DEVICE_ERR_DETECT),
MHI_PM_SYS_ERR_DETECT = BIT(MHI_PM_BIT_SYS_ERR_DETECT),
MHI_PM_SYS_ERR_PROCESS = BIT(MHI_PM_BIT_SYS_ERR_PROCESS),
MHI_PM_SHUTDOWN_PROCESS = BIT(MHI_PM_BIT_SHUTDOWN_PROCESS),
/* link not accessible */
MHI_PM_LD_ERR_FATAL_DETECT = BIT(MHI_PM_BIT_LD_ERR_FATAL_DETECT),
MHI_PM_SHUTDOWN_NO_ACCESS = BIT(MHI_PM_BIT_SHUTDOWN_NO_ACCESS),
};
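/*
* Each pm state above occupies its own bit, so a set of legal states
* can be expressed as a single mask (as in MHI_REG_ACCESS_VALID below)
* and "this severe or worse" checks reduce to numeric comparisons
* against the first error bit, as in MHI_PM_IN_ERROR_STATE and
* MHI_PM_IN_FATAL_STATE.
*/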
#define MHI_REG_ACCESS_VALID(pm_state) ((pm_state & (MHI_PM_POR | MHI_PM_M0 | \
MHI_PM_M2 | MHI_PM_M3_ENTER | MHI_PM_M3_EXIT | \
MHI_PM_DEVICE_ERR_DETECT | MHI_PM_SYS_ERR_DETECT | \
MHI_PM_SYS_ERR_PROCESS | MHI_PM_SHUTDOWN_PROCESS | \
MHI_PM_FW_DL_ERR)))
#define MHI_PM_IN_ERROR_STATE(pm_state) (pm_state >= MHI_PM_FW_DL_ERR)
#define MHI_PM_IN_FATAL_STATE(pm_state) (pm_state >= MHI_PM_LD_ERR_FATAL_DETECT)
#define MHI_DB_ACCESS_VALID(mhi_cntrl) (mhi_cntrl->pm_state & \
mhi_cntrl->db_access)
#define MHI_WAKE_DB_CLEAR_VALID(pm_state) (pm_state & (MHI_PM_M0 | \
MHI_PM_M2 | MHI_PM_M3_EXIT))
#define MHI_WAKE_DB_SET_VALID(pm_state) (pm_state & MHI_PM_M2)
#define MHI_WAKE_DB_FORCE_SET_VALID(pm_state) MHI_WAKE_DB_CLEAR_VALID(pm_state)
#define MHI_EVENT_ACCESS_INVALID(pm_state) (pm_state == MHI_PM_DISABLE || \
MHI_PM_IN_ERROR_STATE(pm_state))
#define MHI_PM_IN_SUSPEND_STATE(pm_state) (pm_state & \
(MHI_PM_M3_ENTER | MHI_PM_M3))
/* accepted buffer type for the channel */
enum MHI_XFER_TYPE {
MHI_XFER_BUFFER,
MHI_XFER_SKB,
MHI_XFER_SCLIST,
MHI_XFER_NOP, /* CPU offload channel, host does not accept transfer */
MHI_XFER_DMA, /* receive dma address, already mapped by client */
MHI_XFER_RSC_DMA, /* RSC type, accept premapped buffer */
};
#define NR_OF_CMD_RINGS (1)
#define CMD_EL_PER_RING (128)
#define PRIMARY_CMD_RING (0)
#define MHI_BW_SCALE_CHAN_DB (126)
#define MHI_DEV_WAKE_DB (127)
#define MHI_MAX_MTU (0xffff)
#define MHI_BW_SCALE_SETUP(er_index) (((MHI_BW_SCALE_CHAN_DB << \
BW_SCALE_CFG_CHAN_DB_ID_SHIFT) & BW_SCALE_CFG_CHAN_DB_ID_MASK) | \
((1 << BW_SCALE_CFG_ENABLED_SHIFT) & BW_SCALE_CFG_ENABLED_MASK) | \
(((er_index) << BW_SCALE_CFG_ER_ID_SHIFT) & BW_SCALE_CFG_ER_ID_MASK))
#define MHI_BW_SCALE_RESULT(status, seq) ((status & 0xF) << 8 | (seq & 0xFF))
#define MHI_BW_SCALE_NACK 0xF
enum MHI_ER_TYPE {
MHI_ER_TYPE_INVALID = 0x0,
MHI_ER_TYPE_VALID = 0x1,
};
enum mhi_er_priority {
MHI_ER_PRIORITY_HIGH,
MHI_ER_PRIORITY_MEDIUM,
MHI_ER_PRIORITY_LOW,
};
#define IS_MHI_ER_PRIORITY_LOW(ev) (ev->priority >= MHI_ER_PRIORITY_LOW)
#define IS_MHI_ER_PRIORITY_HIGH(ev) (ev->priority == MHI_ER_PRIORITY_HIGH)
enum mhi_er_data_type {
MHI_ER_DATA_ELEMENT_TYPE,
MHI_ER_CTRL_ELEMENT_TYPE,
MHI_ER_TSYNC_ELEMENT_TYPE,
MHI_ER_BW_SCALE_ELEMENT_TYPE,
MHI_ER_DATA_TYPE_MAX = MHI_ER_BW_SCALE_ELEMENT_TYPE,
};
enum mhi_ch_ee_mask {
MHI_CH_EE_PBL = BIT(MHI_EE_PBL),
MHI_CH_EE_SBL = BIT(MHI_EE_SBL),
MHI_CH_EE_AMSS = BIT(MHI_EE_AMSS),
MHI_CH_EE_RDDM = BIT(MHI_EE_RDDM),
MHI_CH_EE_PTHRU = BIT(MHI_EE_PTHRU),
MHI_CH_EE_WFW = BIT(MHI_EE_WFW),
MHI_CH_EE_EDL = BIT(MHI_EE_EDL),
};
enum mhi_ch_type {
MHI_CH_TYPE_INVALID = 0,
MHI_CH_TYPE_OUTBOUND = DMA_TO_DEVICE,
MHI_CH_TYPE_INBOUND = DMA_FROM_DEVICE,
MHI_CH_TYPE_INBOUND_COALESCED = 3,
};
struct db_cfg {
bool reset_req;
bool db_mode;
u32 pollcfg;
enum MHI_BRSTMODE brstmode;
dma_addr_t db_val;
void (*process_db)(struct mhi_controller *mhi_cntrl,
struct db_cfg *db_cfg, void __iomem *io_addr,
dma_addr_t db_val);
};
struct mhi_pm_transitions {
enum MHI_PM_STATE from_state;
u32 to_states;
};
struct state_transition {
struct list_head node;
enum MHI_ST_TRANSITION state;
enum MHI_PM_STATE pm_state;
};
struct mhi_ctxt {
struct mhi_event_ctxt *er_ctxt;
struct mhi_chan_ctxt *chan_ctxt;
struct mhi_cmd_ctxt *cmd_ctxt;
dma_addr_t er_ctxt_addr;
dma_addr_t chan_ctxt_addr;
dma_addr_t cmd_ctxt_addr;
};
struct mhi_ring {
dma_addr_t dma_handle;
dma_addr_t iommu_base;
u64 *ctxt_wp; /* point to ctxt wp */
void *pre_aligned;
void *base;
void *rp;
void *wp;
size_t el_size;
size_t len;
size_t elements;
size_t alloc_size;
void __iomem *db_addr;
};
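/*
* Ring bookkeeping sketch (inferred from the fields, not a normative
* description): rp/wp are virtual-address cursors within
* [base, base + len), el_size is the size of one element with
* elements == len / el_size, and ctxt_wp points at the write pointer
* in the shared context so the device can observe updates.
*/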
struct mhi_cmd {
struct mhi_ring ring;
spinlock_t lock;
};
struct mhi_buf_info {
dma_addr_t p_addr;
void *v_addr;
void *bb_addr;
void *wp;
size_t len;
void *cb_buf;
bool used; /* indicates whether the element is in use */
bool pre_mapped; /* already pre-mapped by client */
enum dma_data_direction dir;
};
struct mhi_event {
struct list_head node;
u32 er_index;
u32 intmod;
u32 msi;
int chan; /* this event ring is dedicated to a channel */
enum mhi_er_priority priority;
enum mhi_er_data_type data_type;
struct mhi_ring ring;
struct db_cfg db_cfg;
bool hw_ring;
bool cl_manage;
bool offload_ev; /* managed by a device driver */
bool request_irq; /* has dedicated interrupt handler */
spinlock_t lock;
struct mhi_chan *mhi_chan; /* dedicated to channel */
struct tasklet_struct task;
int (*process_event)(struct mhi_controller *mhi_cntrl,
struct mhi_event *mhi_event,
u32 event_quota);
struct mhi_controller *mhi_cntrl;
};
struct mhi_chan {
u32 chan;
const char *name;
/*
* Important: when consuming, increment tre_ring first; when releasing,
* decrement buf_ring first. If tre_ring has space, buf_ring is
* guaranteed to have space, so we do not need to check both rings.
*/
struct mhi_ring buf_ring;
struct mhi_ring tre_ring;
u32 er_index;
enum mhi_ch_type type;
enum dma_data_direction dir;
struct db_cfg db_cfg;
u32 ee_mask;
enum MHI_XFER_TYPE xfer_type;
enum MHI_CH_STATE ch_state;
enum MHI_EV_CCS ccs;
bool bei; /* based on interrupt moderation, true if greater than 0 */
bool lpm_notify;
bool configured;
bool offload_ch;
bool pre_alloc;
bool auto_start;
bool wake_capable; /* channel should wake up system */
/* functions that generate the transfer ring elements */
int (*gen_tre)(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan, void *buf, void *cb,
size_t len, enum MHI_FLAGS flags);
int (*queue_xfer)(struct mhi_device *mhi_dev,
struct mhi_chan *mhi_chan, void *buf,
size_t len, enum MHI_FLAGS flags);
/* xfer call back */
struct mhi_device *mhi_dev;
void (*xfer_cb)(struct mhi_device *mhi_dev, struct mhi_result *result);
struct mutex mutex;
struct completion completion;
rwlock_t lock;
struct list_head node;
};
struct tsync_node {
struct list_head node;
u32 sequence;
u64 local_time;
u64 remote_time;
struct mhi_device *mhi_dev;
void (*cb_func)(struct mhi_device *mhi_dev, u32 sequence,
u64 local_time, u64 remote_time);
};
struct mhi_timesync {
u32 er_index;
void __iomem *db;
void __iomem *time_reg;
enum MHI_EV_CCS ccs;
struct completion completion;
spinlock_t lock; /* list protection */
struct mutex lpm_mutex; /* lpm protection */
struct list_head head;
};
struct mhi_bus {
struct list_head controller_list;
struct mutex lock;
};
/* default MHI timeout */
#define MHI_TIMEOUT_MS (1000)
extern struct mhi_bus mhi_bus;
/* debug fs related functions */
int mhi_debugfs_mhi_chan_show(struct seq_file *m, void *d);
int mhi_debugfs_mhi_event_show(struct seq_file *m, void *d);
int mhi_debugfs_mhi_states_show(struct seq_file *m, void *d);
int mhi_debugfs_trigger_reset(void *data, u64 val);
void mhi_deinit_debugfs(struct mhi_controller *mhi_cntrl);
void mhi_init_debugfs(struct mhi_controller *mhi_cntrl);
/* power management apis */
enum MHI_PM_STATE __must_check mhi_tryset_pm_state(
struct mhi_controller *mhi_cntrl,
enum MHI_PM_STATE state);
const char *to_mhi_pm_state_str(enum MHI_PM_STATE state);
void mhi_reset_chan(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan);
enum mhi_ee mhi_get_exec_env(struct mhi_controller *mhi_cntrl);
int mhi_queue_state_transition(struct mhi_controller *mhi_cntrl,
enum MHI_ST_TRANSITION state);
void mhi_pm_st_worker(struct work_struct *work);
void mhi_fw_load_worker(struct work_struct *work);
void mhi_process_sys_err(struct mhi_controller *mhi_cntrl);
void mhi_low_priority_worker(struct work_struct *work);
int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl);
void mhi_ctrl_ev_task(unsigned long data);
int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl);
void mhi_pm_m1_transition(struct mhi_controller *mhi_cntrl);
int mhi_pm_m3_transition(struct mhi_controller *mhi_cntrl);
void mhi_notify(struct mhi_device *mhi_dev, enum MHI_CB cb_reason);
int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
struct mhi_event *mhi_event, u32 event_quota);
int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
struct mhi_event *mhi_event, u32 event_quota);
int mhi_process_tsync_event_ring(struct mhi_controller *mhi_cntrl,
struct mhi_event *mhi_event, u32 event_quota);
int mhi_process_bw_scale_ev_ring(struct mhi_controller *mhi_cntrl,
struct mhi_event *mhi_event, u32 event_quota);
int mhi_send_cmd(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
enum MHI_CMD cmd);
int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl);
static inline void mhi_trigger_resume(struct mhi_controller *mhi_cntrl)
{
mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data);
mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data);
pm_wakeup_hard_event(&mhi_cntrl->mhi_dev->dev);
}
/* queue transfer buffer */
int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
void *buf, void *cb, size_t buf_len, enum MHI_FLAGS flags);
int mhi_queue_buf(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
void *buf, size_t len, enum MHI_FLAGS mflags);
int mhi_queue_skb(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
void *buf, size_t len, enum MHI_FLAGS mflags);
int mhi_queue_sclist(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
void *buf, size_t len, enum MHI_FLAGS mflags);
int mhi_queue_nop(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
void *buf, size_t len, enum MHI_FLAGS mflags);
int mhi_queue_dma(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
void *buf, size_t len, enum MHI_FLAGS mflags);
/* register access methods */
void mhi_db_brstmode(struct mhi_controller *mhi_cntrl, struct db_cfg *db_cfg,
void __iomem *db_addr, dma_addr_t wp);
void mhi_db_brstmode_disable(struct mhi_controller *mhi_cntrl,
struct db_cfg *db_mode, void __iomem *db_addr,
dma_addr_t wp);
int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
void __iomem *base, u32 offset, u32 *out);
int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
void __iomem *base, u32 offset, u32 mask,
u32 shift, u32 *out);
void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
u32 offset, u32 val);
void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
u32 offset, u32 mask, u32 shift, u32 val);
void mhi_ring_er_db(struct mhi_event *mhi_event);
void mhi_write_db(struct mhi_controller *mhi_cntrl, void __iomem *db_addr,
dma_addr_t wp);
void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd);
void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan);
int mhi_get_capability_offset(struct mhi_controller *mhi_cntrl, u32 capability,
u32 *offset);
void *mhi_to_virtual(struct mhi_ring *ring, dma_addr_t addr);
int mhi_init_timesync(struct mhi_controller *mhi_cntrl);
int mhi_create_timesync_sysfs(struct mhi_controller *mhi_cntrl);
void mhi_destroy_timesync(struct mhi_controller *mhi_cntrl);
int mhi_create_sysfs(struct mhi_controller *mhi_cntrl);
void mhi_destroy_sysfs(struct mhi_controller *mhi_cntrl);
int mhi_early_notify_device(struct device *dev, void *data);
/* timesync log support */
static inline void mhi_timesync_log(struct mhi_controller *mhi_cntrl)
{
struct mhi_timesync *mhi_tsync = mhi_cntrl->mhi_tsync;
if (mhi_tsync && mhi_cntrl->tsync_log)
mhi_cntrl->tsync_log(mhi_cntrl,
readq_no_log(mhi_tsync->time_reg));
}
/* memory allocation methods */
static inline void *mhi_alloc_coherent(struct mhi_controller *mhi_cntrl,
size_t size,
dma_addr_t *dma_handle,
gfp_t gfp)
{
void *buf = dma_alloc_coherent(mhi_cntrl->dev, size, dma_handle, gfp);
if (buf)
atomic_add(size, &mhi_cntrl->alloc_size);
return buf;
}
static inline void mhi_free_coherent(struct mhi_controller *mhi_cntrl,
size_t size,
void *vaddr,
dma_addr_t dma_handle)
{
atomic_sub(size, &mhi_cntrl->alloc_size);
dma_free_coherent(mhi_cntrl->dev, size, vaddr, dma_handle);
}
static inline void *mhi_alloc_contig_coherent(
struct mhi_controller *mhi_cntrl,
size_t size, dma_addr_t *dma_handle,
gfp_t gfp)
{
void *buf = dma_alloc_attrs(mhi_cntrl->dev, size, dma_handle, gfp,
DMA_ATTR_FORCE_CONTIGUOUS);
if (buf)
atomic_add(size, &mhi_cntrl->alloc_size);
return buf;
}
static inline void mhi_free_contig_coherent(
struct mhi_controller *mhi_cntrl,
size_t size, void *vaddr,
dma_addr_t dma_handle)
{
atomic_sub(size, &mhi_cntrl->alloc_size);
dma_free_attrs(mhi_cntrl->dev, size, vaddr, dma_handle,
DMA_ATTR_FORCE_CONTIGUOUS);
}
struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl);
static inline void mhi_dealloc_device(struct mhi_controller *mhi_cntrl,
struct mhi_device *mhi_dev)
{
kfree(mhi_dev);
}
int mhi_destroy_device(struct device *dev, void *data);
void mhi_create_devices(struct mhi_controller *mhi_cntrl);
int mhi_alloc_bhie_table(struct mhi_controller *mhi_cntrl,
struct image_info **image_info, size_t alloc_size);
void mhi_free_bhie_table(struct mhi_controller *mhi_cntrl,
struct image_info *image_info);
int mhi_map_single_no_bb(struct mhi_controller *mhi_cntrl,
struct mhi_buf_info *buf_info);
int mhi_map_single_use_bb(struct mhi_controller *mhi_cntrl,
struct mhi_buf_info *buf_info);
void mhi_unmap_single_no_bb(struct mhi_controller *mhi_cntrl,
struct mhi_buf_info *buf_info);
void mhi_unmap_single_use_bb(struct mhi_controller *mhi_cntrl,
struct mhi_buf_info *buf_info);
/* initialization methods */
int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan);
void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan);
int mhi_init_mmio(struct mhi_controller *mhi_cntrl);
int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl);
void mhi_deinit_dev_ctxt(struct mhi_controller *mhi_cntrl);
int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl);
void mhi_deinit_free_irq(struct mhi_controller *mhi_cntrl);
int mhi_dtr_init(void);
void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
struct image_info *img_info);
int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan);
/* isr handlers */
irqreturn_t mhi_msi_handlr(int irq_number, void *dev);
irqreturn_t mhi_intvec_threaded_handlr(int irq_number, void *dev);
irqreturn_t mhi_intvec_handlr(int irq_number, void *dev);
void mhi_ev_task(unsigned long data);
#ifdef CONFIG_MHI_DEBUG
#define MHI_ASSERT(cond, msg) do { \
if (cond) \
panic(msg); \
} while (0)
#else
#define MHI_ASSERT(cond, msg) do { \
if (cond) { \
MHI_ERR(msg); \
WARN_ON(cond); \
} \
} while (0)
#endif
#endif /* _MHI_INT_H */

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,35 @@
# SPDX-License-Identifier: GPL-2.0-only
menu "MHI device support"
config MHI_NETDEV
tristate "MHI NETDEV"
depends on MHI_BUS
help
MHI based net device driver for transferring IP traffic
between host and modem. By enabling this driver, clients
can transfer data using a standard network interface.
Over-the-air traffic goes through the MHI netdev interface.
config MHI_UCI
tristate "MHI UCI"
depends on MHI_BUS
help
MHI based UCI driver for transferring data between host and
modem using standard file operations from user space. Open, read,
write, ioctl, and close operations are supported by this driver.
Please check mhi_uci_match_table for all supported channels that
are exposed to userspace.
config MHI_SATELLITE
tristate "MHI SATELLITE"
depends on MHI_BUS
help
MHI proxy satellite device driver enables NON-HLOS MHI satellite
drivers to communicate with the device over the PCIe link without
host involvement. The host facilitates propagation of device
events, channel states, and power management to the NON-HLOS MHI
satellite drivers over IPC communication, which helps with HLOS
power savings.
endmenu


@@ -0,0 +1,5 @@
# SPDX-License-Identifier: GPL-2.0-only
obj-$(CONFIG_MHI_NETDEV) += mhi_netdev.o
obj-$(CONFIG_MHI_UCI) += mhi_uci.o
obj-$(CONFIG_MHI_SATELLITE) += mhi_satellite.o

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,757 @@
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.*/
#include <linux/cdev.h>
#include <linux/device.h>
#include <linux/dma-direction.h>
#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/ipc_logging.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/poll.h>
#include <linux/slab.h>
#include <linux/termios.h>
#include <linux/types.h>
#include <linux/wait.h>
#include <linux/uaccess.h>
#include <linux/mhi.h>
#define DEVICE_NAME "mhi"
#define MHI_UCI_DRIVER_NAME "mhi_uci"
struct uci_chan {
wait_queue_head_t wq;
spinlock_t lock;
struct list_head pending; /* user space waiting to read */
struct uci_buf *cur_buf; /* current buffer user space reading */
size_t rx_size;
};
struct uci_buf {
void *data;
size_t len;
struct list_head node;
};
struct uci_dev {
struct list_head node;
dev_t devt;
struct device *dev;
struct mhi_device *mhi_dev;
const char *chan;
struct mutex mutex; /* sync open and close */
struct uci_chan ul_chan;
struct uci_chan dl_chan;
size_t mtu;
size_t actual_mtu; /* maximum size of incoming buffer */
int ref_count;
bool enabled;
u32 tiocm;
void *ipc_log;
};
struct mhi_uci_drv {
struct list_head head;
struct mutex lock;
struct class *class;
int major;
dev_t dev_t;
};
static enum MHI_DEBUG_LEVEL msg_lvl = MHI_MSG_LVL_ERROR;
#ifdef CONFIG_MHI_DEBUG
#define IPC_LOG_LVL (MHI_MSG_LVL_VERBOSE)
#define MHI_UCI_IPC_LOG_PAGES (25)
#else
#define IPC_LOG_LVL (MHI_MSG_LVL_ERROR)
#define MHI_UCI_IPC_LOG_PAGES (1)
#endif
#ifdef CONFIG_MHI_DEBUG
#define MSG_VERB(fmt, ...) do { \
if (msg_lvl <= MHI_MSG_LVL_VERBOSE) \
printk("%s[D][%s] " fmt, KERN_ERR, __func__, \
##__VA_ARGS__); \
if (IPC_LOG_LVL <= MHI_MSG_LVL_VERBOSE) \
ipc_log_string(uci_dev->ipc_log, "%s[D][%s] " fmt, \
"", __func__, ##__VA_ARGS__); \
} while (0)
#else
#define MSG_VERB(fmt, ...)
#endif
#define MSG_LOG(fmt, ...) do { \
if (msg_lvl <= MHI_MSG_LVL_INFO) \
printk("%s[I][%s] " fmt, KERN_ERR, __func__, \
##__VA_ARGS__); \
if (IPC_LOG_LVL <= MHI_MSG_LVL_INFO) \
ipc_log_string(uci_dev->ipc_log, "%s[I][%s] " fmt, \
"", __func__, ##__VA_ARGS__); \
} while (0)
#define MSG_ERR(fmt, ...) do { \
if (msg_lvl <= MHI_MSG_LVL_ERROR) \
printk("%s[E][%s] " fmt, KERN_ERR, __func__, \
##__VA_ARGS__); \
if (IPC_LOG_LVL <= MHI_MSG_LVL_ERROR) \
ipc_log_string(uci_dev->ipc_log, "%s[E][%s] " fmt, \
"", __func__, ##__VA_ARGS__); \
} while (0)
#define MAX_UCI_DEVICES (64)
static DECLARE_BITMAP(uci_minors, MAX_UCI_DEVICES);
static struct mhi_uci_drv mhi_uci_drv;
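/*
* Pre-queue inbound buffers on the downlink channel. Each allocation
* is mtu bytes; the tail of the buffer past actual_mtu (assumed to be
* set up elsewhere as mtu - sizeof(struct uci_buf)) doubles as the
* struct uci_buf bookkeeping node for that same buffer.
*/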
static int mhi_queue_inbound(struct uci_dev *uci_dev)
{
struct mhi_device *mhi_dev = uci_dev->mhi_dev;
int nr_trbs = mhi_get_no_free_descriptors(mhi_dev, DMA_FROM_DEVICE);
size_t mtu = uci_dev->mtu;
size_t actual_mtu = uci_dev->actual_mtu;
void *buf;
struct uci_buf *uci_buf;
int ret = -EIO, i;
for (i = 0; i < nr_trbs; i++) {
buf = kmalloc(mtu, GFP_KERNEL);
if (!buf)
return -ENOMEM;
uci_buf = buf + actual_mtu;
uci_buf->data = buf;
MSG_VERB("Allocated buf %d of %d size %ld\n", i, nr_trbs,
actual_mtu);
ret = mhi_queue_transfer(mhi_dev, DMA_FROM_DEVICE, buf,
actual_mtu, MHI_EOT);
if (ret) {
kfree(buf);
MSG_ERR("Failed to queue buffer %d\n", i);
return ret;
}
}
return ret;
}
static long mhi_uci_ioctl(struct file *file,
unsigned int cmd,
unsigned long arg)
{
struct uci_dev *uci_dev = file->private_data;
struct mhi_device *mhi_dev = uci_dev->mhi_dev;
struct uci_chan *uci_chan = &uci_dev->dl_chan;
long ret = -ENOIOCTLCMD;
mutex_lock(&uci_dev->mutex);
if (cmd == TIOCMGET) {
spin_lock_bh(&uci_chan->lock);
ret = uci_dev->tiocm;
spin_unlock_bh(&uci_chan->lock);
} else if (uci_dev->enabled) {
ret = mhi_ioctl(mhi_dev, cmd, arg);
if (!ret) {
spin_lock_bh(&uci_chan->lock);
uci_dev->tiocm = mhi_dev->tiocm;
spin_unlock_bh(&uci_chan->lock);
}
}
mutex_unlock(&uci_dev->mutex);
return ret;
}
static int mhi_uci_release(struct inode *inode, struct file *file)
{
struct uci_dev *uci_dev = file->private_data;
mutex_lock(&uci_dev->mutex);
uci_dev->ref_count--;
if (!uci_dev->ref_count) {
struct uci_buf *itr, *tmp;
struct uci_chan *uci_chan;
MSG_LOG("Last client left, closing node\n");
if (uci_dev->enabled)
mhi_unprepare_from_transfer(uci_dev->mhi_dev);
/* clean inbound channel */
uci_chan = &uci_dev->dl_chan;
list_for_each_entry_safe(itr, tmp, &uci_chan->pending, node) {
list_del(&itr->node);
kfree(itr->data);
}
if (uci_chan->cur_buf)
kfree(uci_chan->cur_buf->data);
uci_chan->cur_buf = NULL;
if (!uci_dev->enabled) {
MSG_LOG("Node is deleted, freeing dev node\n");
mutex_unlock(&uci_dev->mutex);
mutex_destroy(&uci_dev->mutex);
clear_bit(MINOR(uci_dev->devt), uci_minors);
kfree(uci_dev);
return 0;
}
}
MSG_LOG("exit: ref_count:%d\n", uci_dev->ref_count);
mutex_unlock(&uci_dev->mutex);
return 0;
}
static __poll_t mhi_uci_poll(struct file *file, poll_table *wait)
{
struct uci_dev *uci_dev = file->private_data;
struct mhi_device *mhi_dev = uci_dev->mhi_dev;
struct uci_chan *uci_chan;
__poll_t mask = 0;
poll_wait(file, &uci_dev->dl_chan.wq, wait);
poll_wait(file, &uci_dev->ul_chan.wq, wait);
uci_chan = &uci_dev->dl_chan;
spin_lock_bh(&uci_chan->lock);
if (!uci_dev->enabled) {
mask = EPOLLERR;
} else {
if (!list_empty(&uci_chan->pending) || uci_chan->cur_buf) {
MSG_VERB("Client can read from node\n");
mask |= EPOLLIN | EPOLLRDNORM;
}
if (uci_dev->tiocm) {
MSG_VERB("Line status changed\n");
mask |= EPOLLPRI;
}
}
spin_unlock_bh(&uci_chan->lock);
uci_chan = &uci_dev->ul_chan;
spin_lock_bh(&uci_chan->lock);
if (!uci_dev->enabled) {
mask |= EPOLLERR;
} else if (mhi_get_no_free_descriptors(mhi_dev, DMA_TO_DEVICE) > 0) {
MSG_VERB("Client can write to node\n");
mask |= EPOLLOUT | EPOLLWRNORM;
}
spin_unlock_bh(&uci_chan->lock);
MSG_LOG("Client attempted to poll, returning mask 0x%x\n", mask);
return mask;
}
static ssize_t mhi_uci_write(struct file *file,
const char __user *buf,
size_t count,
loff_t *offp)
{
struct uci_dev *uci_dev = file->private_data;
struct mhi_device *mhi_dev = uci_dev->mhi_dev;
struct uci_chan *uci_chan = &uci_dev->ul_chan;
size_t bytes_xfered = 0;
int ret, nr_avail;
if (!buf || !count)
return -EINVAL;
/* confirm channel is active */
spin_lock_bh(&uci_chan->lock);
if (!uci_dev->enabled) {
spin_unlock_bh(&uci_chan->lock);
return -ERESTARTSYS;
}
MSG_VERB("Enter: to xfer:%lu bytes\n", count);
while (count) {
size_t xfer_size;
void *kbuf;
enum MHI_FLAGS flags;
spin_unlock_bh(&uci_chan->lock);
/* wait for free descriptors */
ret = wait_event_interruptible(uci_chan->wq,
(!uci_dev->enabled) ||
(nr_avail = mhi_get_no_free_descriptors(mhi_dev,
DMA_TO_DEVICE)) > 0);
if (ret == -ERESTARTSYS || !uci_dev->enabled) {
MSG_LOG("Exit signal caught for node or not enabled\n");
return -ERESTARTSYS;
}
xfer_size = min_t(size_t, count, uci_dev->mtu);
kbuf = kmalloc(xfer_size, GFP_KERNEL);
if (!kbuf) {
MSG_ERR("Failed to allocate memory %lu\n", xfer_size);
return -ENOMEM;
}
ret = copy_from_user(kbuf, buf, xfer_size);
if (unlikely(ret)) {
/* copy_from_user() returns # of bytes not copied, not an errno */
kfree(kbuf);
return -EFAULT;
}
spin_lock_bh(&uci_chan->lock);
/* chain TREs while more data remains and the ring will not fill; otherwise mark end of transfer */
if (nr_avail > 1 && (count - xfer_size))
flags = MHI_CHAIN;
else
flags = MHI_EOT;
if (uci_dev->enabled)
ret = mhi_queue_transfer(mhi_dev, DMA_TO_DEVICE, kbuf,
xfer_size, flags);
else
ret = -ERESTARTSYS;
if (ret) {
kfree(kbuf);
goto sys_interrupt;
}
bytes_xfered += xfer_size;
count -= xfer_size;
buf += xfer_size;
}
spin_unlock_bh(&uci_chan->lock);
MSG_VERB("Exit: Number of bytes xferred:%lu\n", bytes_xfered);
return bytes_xfered;
sys_interrupt:
spin_unlock_bh(&uci_chan->lock);
return ret;
}
static ssize_t mhi_uci_read(struct file *file,
char __user *buf,
size_t count,
loff_t *ppos)
{
struct uci_dev *uci_dev = file->private_data;
struct mhi_device *mhi_dev = uci_dev->mhi_dev;
struct uci_chan *uci_chan = &uci_dev->dl_chan;
struct uci_buf *uci_buf;
char *ptr;
size_t to_copy;
int ret = 0;
if (!buf)
return -EINVAL;
MSG_VERB("Client provided buf len:%lu\n", count);
/* confirm channel is active */
spin_lock_bh(&uci_chan->lock);
if (!uci_dev->enabled) {
spin_unlock_bh(&uci_chan->lock);
return -ERESTARTSYS;
}
/* No data available to read, wait */
if (!uci_chan->cur_buf && list_empty(&uci_chan->pending)) {
MSG_VERB("No data available to read waiting\n");
spin_unlock_bh(&uci_chan->lock);
ret = wait_event_interruptible(uci_chan->wq,
(!uci_dev->enabled ||
!list_empty(&uci_chan->pending)));
if (ret == -ERESTARTSYS) {
MSG_LOG("Exit signal caught for node\n");
return -ERESTARTSYS;
}
spin_lock_bh(&uci_chan->lock);
if (!uci_dev->enabled) {
MSG_LOG("node is disabled\n");
ret = -ERESTARTSYS;
goto read_error;
}
}
/* new read, get the next descriptor from the list */
if (!uci_chan->cur_buf) {
uci_buf = list_first_entry_or_null(&uci_chan->pending,
struct uci_buf, node);
if (unlikely(!uci_buf)) {
ret = -EIO;
goto read_error;
}
list_del(&uci_buf->node);
uci_chan->cur_buf = uci_buf;
uci_chan->rx_size = uci_buf->len;
MSG_VERB("Got pkt of size:%zu\n", uci_chan->rx_size);
}
uci_buf = uci_chan->cur_buf;
spin_unlock_bh(&uci_chan->lock);
/* Copy the buffer to user space */
to_copy = min_t(size_t, count, uci_chan->rx_size);
ptr = uci_buf->data + (uci_buf->len - uci_chan->rx_size);
/* copy_to_user() returns # of bytes not copied, not an errno */
ret = copy_to_user(buf, ptr, to_copy);
if (ret)
return -EFAULT;
MSG_VERB("Copied %lu of %lu bytes\n", to_copy, uci_chan->rx_size);
uci_chan->rx_size -= to_copy;
/* we finished with this buffer, queue it back to hardware */
if (!uci_chan->rx_size) {
spin_lock_bh(&uci_chan->lock);
uci_chan->cur_buf = NULL;
if (uci_dev->enabled)
ret = mhi_queue_transfer(mhi_dev, DMA_FROM_DEVICE,
uci_buf->data,
uci_dev->actual_mtu, MHI_EOT);
else
ret = -ERESTARTSYS;
if (ret) {
MSG_ERR("Failed to recycle element\n");
kfree(uci_buf->data);
goto read_error;
}
spin_unlock_bh(&uci_chan->lock);
}
MSG_VERB("Returning %lu bytes\n", to_copy);
return to_copy;
read_error:
spin_unlock_bh(&uci_chan->lock);
return ret;
}
static int mhi_uci_open(struct inode *inode, struct file *filp)
{
struct uci_dev *uci_dev = NULL, *tmp_dev;
int ret = -EIO;
struct uci_buf *buf_itr, *tmp;
struct uci_chan *dl_chan;
mutex_lock(&mhi_uci_drv.lock);
list_for_each_entry(tmp_dev, &mhi_uci_drv.head, node) {
if (tmp_dev->devt == inode->i_rdev) {
uci_dev = tmp_dev;
break;
}
}
/* could not find a minor node */
if (!uci_dev)
goto error_exit;
mutex_lock(&uci_dev->mutex);
if (!uci_dev->enabled) {
MSG_ERR("Node exist, but not in active state!\n");
goto error_open_chan;
}
uci_dev->ref_count++;
MSG_LOG("Node open, ref counts %u\n", uci_dev->ref_count);
if (uci_dev->ref_count == 1) {
MSG_LOG("Starting channel\n");
ret = mhi_prepare_for_transfer(uci_dev->mhi_dev);
if (ret) {
MSG_ERR("Error starting transfer channels\n");
uci_dev->ref_count--;
goto error_open_chan;
}
ret = mhi_queue_inbound(uci_dev);
if (ret)
goto error_rx_queue;
}
filp->private_data = uci_dev;
mutex_unlock(&uci_dev->mutex);
mutex_unlock(&mhi_uci_drv.lock);
return 0;
error_rx_queue:
dl_chan = &uci_dev->dl_chan;
mhi_unprepare_from_transfer(uci_dev->mhi_dev);
list_for_each_entry_safe(buf_itr, tmp, &dl_chan->pending, node) {
list_del(&buf_itr->node);
kfree(buf_itr->data);
}
error_open_chan:
mutex_unlock(&uci_dev->mutex);
error_exit:
mutex_unlock(&mhi_uci_drv.lock);
return ret;
}
static const struct file_operations mhidev_fops = {
.open = mhi_uci_open,
.release = mhi_uci_release,
.read = mhi_uci_read,
.write = mhi_uci_write,
.poll = mhi_uci_poll,
.unlocked_ioctl = mhi_uci_ioctl,
};
static void mhi_uci_remove(struct mhi_device *mhi_dev)
{
struct uci_dev *uci_dev = mhi_device_get_devdata(mhi_dev);
MSG_LOG("Enter\n");
mutex_lock(&mhi_uci_drv.lock);
mutex_lock(&uci_dev->mutex);
/* disable the node */
spin_lock_irq(&uci_dev->dl_chan.lock);
spin_lock_irq(&uci_dev->ul_chan.lock);
uci_dev->enabled = false;
spin_unlock_irq(&uci_dev->ul_chan.lock);
spin_unlock_irq(&uci_dev->dl_chan.lock);
wake_up(&uci_dev->dl_chan.wq);
wake_up(&uci_dev->ul_chan.wq);
/* delete the node to prevent new opens */
device_destroy(mhi_uci_drv.class, uci_dev->devt);
uci_dev->dev = NULL;
list_del(&uci_dev->node);
/* safe to free memory only if all file nodes are closed */
if (!uci_dev->ref_count) {
mutex_unlock(&uci_dev->mutex);
mutex_destroy(&uci_dev->mutex);
clear_bit(MINOR(uci_dev->devt), uci_minors);
kfree(uci_dev);
mutex_unlock(&mhi_uci_drv.lock);
return;
}
MSG_LOG("Exit\n");
mutex_unlock(&uci_dev->mutex);
mutex_unlock(&mhi_uci_drv.lock);
}
static int mhi_uci_probe(struct mhi_device *mhi_dev,
const struct mhi_device_id *id)
{
struct uci_dev *uci_dev;
int minor;
char node_name[32];
int dir;
uci_dev = kzalloc(sizeof(*uci_dev), GFP_KERNEL);
if (!uci_dev)
return -ENOMEM;
mutex_init(&uci_dev->mutex);
uci_dev->mhi_dev = mhi_dev;
minor = find_first_zero_bit(uci_minors, MAX_UCI_DEVICES);
if (minor >= MAX_UCI_DEVICES) {
kfree(uci_dev);
return -ENOSPC;
}
mutex_lock(&uci_dev->mutex);
mutex_lock(&mhi_uci_drv.lock);
uci_dev->devt = MKDEV(mhi_uci_drv.major, minor);
uci_dev->dev = device_create(mhi_uci_drv.class, &mhi_dev->dev,
uci_dev->devt, uci_dev,
DEVICE_NAME "_%04x_%02u.%02u.%02u%s%d",
mhi_dev->dev_id, mhi_dev->domain,
mhi_dev->bus, mhi_dev->slot, "_pipe_",
mhi_dev->ul_chan_id);
set_bit(minor, uci_minors);
/* create debugging buffer */
snprintf(node_name, sizeof(node_name), "mhi_uci_%04x_%02u.%02u.%02u_%d",
mhi_dev->dev_id, mhi_dev->domain, mhi_dev->bus, mhi_dev->slot,
mhi_dev->ul_chan_id);
uci_dev->ipc_log = ipc_log_context_create(MHI_UCI_IPC_LOG_PAGES,
node_name, 0);
for (dir = 0; dir < 2; dir++) {
struct uci_chan *uci_chan = (dir) ?
&uci_dev->ul_chan : &uci_dev->dl_chan;
spin_lock_init(&uci_chan->lock);
init_waitqueue_head(&uci_chan->wq);
INIT_LIST_HEAD(&uci_chan->pending);
}
uci_dev->mtu = min_t(size_t, id->driver_data, mhi_dev->mtu);
uci_dev->actual_mtu = uci_dev->mtu - sizeof(struct uci_buf);
mhi_device_set_devdata(mhi_dev, uci_dev);
uci_dev->enabled = true;
list_add(&uci_dev->node, &mhi_uci_drv.head);
mutex_unlock(&mhi_uci_drv.lock);
mutex_unlock(&uci_dev->mutex);
MSG_LOG("channel:%s successfully probed\n", mhi_dev->chan_name);
return 0;
}
static void mhi_ul_xfer_cb(struct mhi_device *mhi_dev,
struct mhi_result *mhi_result)
{
struct uci_dev *uci_dev = mhi_device_get_devdata(mhi_dev);
struct uci_chan *uci_chan = &uci_dev->ul_chan;
MSG_VERB("status:%d xfer_len:%zu\n", mhi_result->transaction_status,
mhi_result->bytes_xferd);
kfree(mhi_result->buf_addr);
if (!mhi_result->transaction_status)
wake_up(&uci_chan->wq);
}
static void mhi_dl_xfer_cb(struct mhi_device *mhi_dev,
struct mhi_result *mhi_result)
{
struct uci_dev *uci_dev = mhi_device_get_devdata(mhi_dev);
struct uci_chan *uci_chan = &uci_dev->dl_chan;
unsigned long flags;
struct uci_buf *buf;
MSG_VERB("status:%d receive_len:%zu\n", mhi_result->transaction_status,
mhi_result->bytes_xferd);
if (mhi_result->transaction_status == -ENOTCONN) {
kfree(mhi_result->buf_addr);
return;
}
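/*
 * Recover the struct uci_buf that mhi_queue_inbound() placed at the
 * tail of this receive buffer, then move it to the pending list for
 * readers to consume.
 */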
spin_lock_irqsave(&uci_chan->lock, flags);
buf = mhi_result->buf_addr + uci_dev->actual_mtu;
buf->data = mhi_result->buf_addr;
buf->len = mhi_result->bytes_xferd;
list_add_tail(&buf->node, &uci_chan->pending);
spin_unlock_irqrestore(&uci_chan->lock, flags);
if (mhi_dev->dev.power.wakeup)
pm_wakeup_hard_event(&mhi_dev->dev);
wake_up(&uci_chan->wq);
}
static void mhi_status_cb(struct mhi_device *mhi_dev, enum MHI_CB reason)
{
struct uci_dev *uci_dev = mhi_device_get_devdata(mhi_dev);
struct uci_chan *uci_chan = &uci_dev->dl_chan;
unsigned long flags;
if (reason == MHI_CB_DTR_SIGNAL) {
spin_lock_irqsave(&uci_chan->lock, flags);
uci_dev->tiocm = mhi_dev->tiocm;
spin_unlock_irqrestore(&uci_chan->lock, flags);
wake_up(&uci_chan->wq);
}
}
/* .driver_data stores max mtu */
static const struct mhi_device_id mhi_uci_match_table[] = {
{ .chan = "LOOPBACK", .driver_data = 0x1000 },
{ .chan = "SAHARA", .driver_data = 0x8000 },
{ .chan = "EFS", .driver_data = 0x1000 },
{ .chan = "QMI0", .driver_data = 0x1000 },
{ .chan = "QMI1", .driver_data = 0x1000 },
{ .chan = "TF", .driver_data = 0x1000 },
{ .chan = "DUN", .driver_data = 0x1000 },
{},
};
static struct mhi_driver mhi_uci_driver = {
.id_table = mhi_uci_match_table,
.remove = mhi_uci_remove,
.probe = mhi_uci_probe,
.ul_xfer_cb = mhi_ul_xfer_cb,
.dl_xfer_cb = mhi_dl_xfer_cb,
.status_cb = mhi_status_cb,
.driver = {
.name = MHI_UCI_DRIVER_NAME,
.owner = THIS_MODULE,
},
};
static int __init mhi_uci_init(void)
{
int ret;
ret = register_chrdev(0, MHI_UCI_DRIVER_NAME, &mhidev_fops);
if (ret < 0)
return ret;
mhi_uci_drv.major = ret;
mhi_uci_drv.class = class_create(THIS_MODULE, MHI_UCI_DRIVER_NAME);
if (IS_ERR(mhi_uci_drv.class)) {
/* do not leak the chrdev major on class creation failure */
unregister_chrdev(mhi_uci_drv.major, MHI_UCI_DRIVER_NAME);
return -ENODEV;
}
mutex_init(&mhi_uci_drv.lock);
INIT_LIST_HEAD(&mhi_uci_drv.head);
ret = mhi_driver_register(&mhi_uci_driver);
if (ret)
class_destroy(mhi_uci_drv.class);
return ret;
}
module_init(mhi_uci_init);
static void __exit mhi_uci_exit(void)
{
mhi_driver_unregister(&mhi_uci_driver);
class_destroy(mhi_uci_drv.class);
/* register_chrdev() is paired with unregister_chrdev(), not *_region() */
if (mhi_uci_drv.major)
unregister_chrdev(mhi_uci_drv.major, MHI_UCI_DRIVER_NAME);
}
module_exit(mhi_uci_exit);
MODULE_LICENSE("GPL v2");
MODULE_ALIAS("MHI_UCI");
MODULE_DESCRIPTION("MHI UCI Driver");

901
include/linux/mhi.h Normal file

@ -0,0 +1,901 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (c) 2018-2019, The Linux Foundation. All rights reserved. */
#ifndef _MHI_H_
#define _MHI_H_
struct mhi_chan;
struct mhi_event;
struct mhi_ctxt;
struct mhi_cmd;
struct image_info;
struct bhi_vec_entry;
struct mhi_timesync;
struct mhi_buf_info;
/**
* enum MHI_CB - MHI callback
* @MHI_CB_IDLE: MHI entered idle state
* @MHI_CB_PENDING_DATA: New data available for client to process
* @MHI_CB_DTR_SIGNAL: DTR signaling update
* @MHI_CB_LPM_ENTER: MHI host entered low power mode
* @MHI_CB_LPM_EXIT: MHI host about to exit low power mode
* @MHI_CB_EE_RDDM: MHI device entered RDDM execution environment
* @MHI_CB_EE_MISSION_MODE: MHI device entered Mission Mode execution environment
* @MHI_CB_SYS_ERROR: MHI device entered an error state (may recover)
* @MHI_CB_FATAL_ERROR: MHI device entered a fatal error state
*/
enum MHI_CB {
MHI_CB_IDLE,
MHI_CB_PENDING_DATA,
MHI_CB_DTR_SIGNAL,
MHI_CB_LPM_ENTER,
MHI_CB_LPM_EXIT,
MHI_CB_EE_RDDM,
MHI_CB_EE_MISSION_MODE,
MHI_CB_SYS_ERROR,
MHI_CB_FATAL_ERROR,
};
/**
* enum MHI_DEBUG_LEVEL - debug message verbosity levels
*/
enum MHI_DEBUG_LEVEL {
MHI_MSG_LVL_VERBOSE,
MHI_MSG_LVL_INFO,
MHI_MSG_LVL_ERROR,
MHI_MSG_LVL_CRITICAL,
MHI_MSG_LVL_MASK_ALL,
MHI_MSG_LVL_MAX,
};
/**
* enum MHI_FLAGS - Transfer flags
* @MHI_EOB: End of buffer for bulk transfer
* @MHI_EOT: End of transfer
* @MHI_CHAIN: Linked transfer
*/
enum MHI_FLAGS {
MHI_EOB,
MHI_EOT,
MHI_CHAIN,
};
/**
* enum mhi_device_type - Device types
* @MHI_XFER_TYPE: Handles data transfer
* @MHI_TIMESYNC_TYPE: Used for the timesync feature
* @MHI_CONTROLLER_TYPE: Control device
*/
enum mhi_device_type {
MHI_XFER_TYPE,
MHI_TIMESYNC_TYPE,
MHI_CONTROLLER_TYPE,
};
/**
* enum mhi_ee - device current execution environment
* @MHI_EE_PBL - device in PBL
* @MHI_EE_SBL - device in SBL
* @MHI_EE_AMSS - device in mission mode (firmware fully loaded)
* @MHI_EE_RDDM - device in ram dump collection mode
* @MHI_EE_WFW - device in WLAN firmware mode
* @MHI_EE_PTHRU - device in PBL but configured in pass thru mode
* @MHI_EE_EDL - device in emergency download mode
*/
enum mhi_ee {
MHI_EE_PBL,
MHI_EE_SBL,
MHI_EE_AMSS,
MHI_EE_RDDM,
MHI_EE_WFW,
MHI_EE_PTHRU,
MHI_EE_EDL,
MHI_EE_MAX_SUPPORTED = MHI_EE_EDL,
MHI_EE_DISABLE_TRANSITION, /* local EE, not related to mhi spec */
MHI_EE_NOT_SUPPORTED,
MHI_EE_MAX,
};
/**
* enum mhi_dev_state - device current MHI state
*/
enum mhi_dev_state {
MHI_STATE_RESET = 0x0,
MHI_STATE_READY = 0x1,
MHI_STATE_M0 = 0x2,
MHI_STATE_M1 = 0x3,
MHI_STATE_M2 = 0x4,
MHI_STATE_M3 = 0x5,
MHI_STATE_M3_FAST = 0x6,
MHI_STATE_BHI = 0x7,
MHI_STATE_SYS_ERR = 0xFF,
MHI_STATE_MAX,
};
#define MHI_VOTE_BUS BIT(0) /* do not disable the bus */
#define MHI_VOTE_DEVICE BIT(1) /* prevent mhi device from entering lpm */
/**
* struct mhi_link_info - bw requirement
* @target_link_speed - as defined by TLS bits in LinkControl reg
* @target_link_width - as defined by NLW bits in LinkStatus reg
* @sequence_num - used by device to track bw requests sent to host
*/
struct mhi_link_info {
unsigned int target_link_speed;
unsigned int target_link_width;
int sequence_num;
};
/**
* struct image_info - firmware and rddm table
* @mhi_buf - Contains device firmware and rddm table
* @entries - # of entries in table
*/
struct image_info {
struct mhi_buf *mhi_buf;
struct bhi_vec_entry *bhi_vec;
u32 entries;
};
/* rddm header info */
#define MAX_RDDM_TABLE_SIZE 6
/**
* struct rddm_table_info - rddm table info
* @base_address - Start offset of the file
* @actual_phys_address - phys addr offset of file
* @size - size of file
* @description - file description
* @file_name - name of file
*/
struct rddm_table_info {
u64 base_address;
u64 actual_phys_address;
u64 size;
char description[20];
char file_name[20];
};
/**
* struct rddm_header - rddm header
* @version - header version
* @header_size - size of header
* @table_info - array of rddm table info
*/
struct rddm_header {
u32 version;
u32 header_size;
struct rddm_table_info table_info[MAX_RDDM_TABLE_SIZE];
};
/**
* struct file_info - keeping track of file info while traversing the rddm
* table header
* @file_offset - current file offset
* @file_size - size of the current file
* @seg_idx - mhi buf seg array index
* @rem_seg_len - remaining length of the segment containing current file
*/
struct file_info {
u8 *file_offset;
u32 file_size;
u32 seg_idx;
u32 rem_seg_len;
};
/**
* struct mhi_controller - Master controller structure for external modem
* @dev: Device associated with this controller
* @of_node: DT that has MHI configuration information
* @regs: Points to base of MHI MMIO register space
* @bhi: Points to base of MHI BHI register space
* @bhie: Points to base of MHI BHIe register space
* @wake_db: MHI WAKE doorbell register address
* @dev_id: PCIe device id of the external device
* @domain: PCIe domain the device connected to
* @bus: PCIe bus the device assigned to
* @slot: PCIe slot for the modem
* @iova_start: IOMMU starting address for data
* @iova_stop: IOMMU stop address for data
* @fw_image: Firmware image name for normal booting
* @edl_image: Firmware image name for emergency download mode
* @fbc_download: MHI host needs to do complete image transfer
* @rddm_size: RAM dump size that host should allocate for debugging purpose
* @sbl_size: SBL image size
* @seg_len: BHIe vector size
* @fbc_image: Points to firmware image buffer
* @rddm_image: Points to RAM dump buffer
* @max_chan: Maximum number of channels controller support
* @mhi_chan: Points to channel configuration table
* @lpm_chans: List of channels that require LPM notifications
* @total_ev_rings: Total # of event rings allocated
* @hw_ev_rings: Number of hardware event rings
* @sw_ev_rings: Number of software event rings
* @msi_required: Number of msi required to operate
* @msi_allocated: Number of msi allocated by bus master
* @irq: base irq # to request
* @mhi_event: MHI event ring configurations table
* @mhi_cmd: MHI command ring configurations table
* @mhi_ctxt: MHI device context, shared memory between host and device
* @timeout_ms: Timeout in ms for state transitions
* @pm_state: Power management state
* @ee: MHI device execution environment
* @dev_state: MHI device state
* @mhi_link_info: requested link bandwidth by device
* @status_cb: CB function to notify various power states to bus master
* @link_status: Query link status in case of abnormal value read from device
* @runtime_get: Async runtime resume function
* @runtime_put: Release votes
* @time_get: Return host time in us
* @lpm_disable: Request controller to disable link level low power modes
* @lpm_enable: Controller may enable link level low power modes again
* @priv_data: Points to bus master's private data
*/
struct mhi_controller {
struct list_head node;
struct mhi_device *mhi_dev;
/* device node for iommu ops */
struct device *dev;
struct device_node *of_node;
/* mmio base */
phys_addr_t base_addr;
void __iomem *regs;
void __iomem *bhi;
void __iomem *bhie;
void __iomem *wake_db;
void __iomem *bw_scale_db;
/* device topology */
u32 dev_id;
u32 domain;
u32 bus;
u32 slot;
u32 family_number;
u32 device_number;
u32 major_version;
u32 minor_version;
/* addressing window */
dma_addr_t iova_start;
dma_addr_t iova_stop;
/* fw images */
const char *fw_image;
const char *edl_image;
/* mhi host manages downloading entire fbc images */
bool fbc_download;
size_t rddm_size;
size_t sbl_size;
size_t seg_len;
u32 session_id;
u32 sequence_id;
struct image_info *fbc_image;
struct image_info *rddm_image;
/* physical channel config data */
u32 max_chan;
struct mhi_chan *mhi_chan;
struct list_head lpm_chans; /* these chan require lpm notification */
/* physical event config data */
u32 total_ev_rings;
u32 hw_ev_rings;
u32 sw_ev_rings;
u32 msi_required;
u32 msi_allocated;
int *irq; /* interrupt table */
struct mhi_event *mhi_event;
struct list_head lp_ev_rings; /* low priority event rings */
/* cmd rings */
struct mhi_cmd *mhi_cmd;
/* mhi context (shared with device) */
struct mhi_ctxt *mhi_ctxt;
u32 timeout_ms;
u32 m2_timeout_ms; /* wait time for host to continue suspend after m2 */
/* caller should grab pm_mutex for suspend/resume operations */
struct mutex pm_mutex;
bool pre_init;
rwlock_t pm_lock;
u32 pm_state;
u32 saved_pm_state; /* saved state during fast suspend */
u32 db_access; /* db access only on these states */
enum mhi_ee ee;
u32 ee_table[MHI_EE_MAX]; /* ee conversion from dev to host */
enum mhi_dev_state dev_state;
enum mhi_dev_state saved_dev_state;
bool wake_set;
bool ignore_override;
atomic_t dev_wake;
atomic_t alloc_size;
atomic_t pending_pkts;
struct list_head transition_list;
spinlock_t transition_lock;
spinlock_t wlock;
/* target bandwidth info */
struct mhi_link_info mhi_link_info;
/* debug counters */
u32 M0, M2, M3, M3_FAST;
/* worker for different state transitions */
struct work_struct st_worker;
struct work_struct fw_worker;
struct work_struct low_priority_worker;
wait_queue_head_t state_event;
/* shadow functions */
void (*status_cb)(struct mhi_controller *mhi_cntrl, void *priv,
enum MHI_CB reason);
int (*link_status)(struct mhi_controller *mhi_cntrl, void *priv);
void (*wake_get)(struct mhi_controller *mhi_cntrl, bool override);
void (*wake_put)(struct mhi_controller *mhi_cntrl, bool override);
void (*wake_toggle)(struct mhi_controller *mhi_cntrl);
int (*runtime_get)(struct mhi_controller *mhi_cntrl, void *priv);
void (*runtime_put)(struct mhi_controller *mhi_cntrl, void *priv);
u64 (*time_get)(struct mhi_controller *mhi_cntrl, void *priv);
int (*lpm_disable)(struct mhi_controller *mhi_cntrl, void *priv);
int (*lpm_enable)(struct mhi_controller *mhi_cntrl, void *priv);
int (*map_single)(struct mhi_controller *mhi_cntrl,
struct mhi_buf_info *buf);
void (*unmap_single)(struct mhi_controller *mhi_cntrl,
struct mhi_buf_info *buf);
void (*tsync_log)(struct mhi_controller *mhi_cntrl, u64 remote_time);
int (*bw_scale)(struct mhi_controller *mhi_cntrl,
struct mhi_link_info *link_info);
/* channel to control DTR messaging */
struct mhi_device *dtr_dev;
/* bounce buffer settings */
bool bounce_buf;
size_t buffer_len;
/* supports time sync feature */
struct mhi_timesync *mhi_tsync;
struct mhi_device *tsync_dev;
u64 local_timer_freq;
u64 remote_timer_freq;
/* kernel log level */
enum MHI_DEBUG_LEVEL klog_lvl;
/* private log level controller driver to set */
enum MHI_DEBUG_LEVEL log_lvl;
/* controller specific data */
bool power_down;
void *priv_data;
void *log_buf;
struct dentry *dentry;
struct dentry *parent;
};
/**
* struct mhi_device - mhi device structure bound to a channel
* @dev: Device associated with the channels
* @mtu: Maximum # of bytes controller support
* @ul_chan_id: MHI channel id for UL transfer
* @dl_chan_id: MHI channel id for DL transfer
* @tiocm: Device current terminal settings
* @early_notif: This device needs an early notification in case of error
* with external modem.
* @dev_vote: Keep external device in active state
* @bus_vote: Keep physical bus (pci, spi) in active state
* @priv: Driver private data
*/
struct mhi_device {
struct device dev;
u32 dev_id;
u32 domain;
u32 bus;
u32 slot;
size_t mtu;
int ul_chan_id;
int dl_chan_id;
int ul_event_id;
int dl_event_id;
u32 tiocm;
bool early_notif;
const struct mhi_device_id *id;
const char *chan_name;
struct mhi_controller *mhi_cntrl;
struct mhi_chan *ul_chan;
struct mhi_chan *dl_chan;
atomic_t dev_vote;
atomic_t bus_vote;
enum mhi_device_type dev_type;
void *priv_data;
int (*ul_xfer)(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
void *buf, size_t len, enum MHI_FLAGS flags);
int (*dl_xfer)(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
void *buf, size_t size, enum MHI_FLAGS flags);
void (*status_cb)(struct mhi_device *mhi_dev, enum MHI_CB reason);
};
/**
* struct mhi_result - Completed buffer information
* @buf_addr: Address of data buffer
* @dir: Channel direction
* @bytes_xferd: # of bytes transferred
* @transaction_status: Status of the last transfer
*/
struct mhi_result {
void *buf_addr;
enum dma_data_direction dir;
size_t bytes_xferd;
int transaction_status;
};
/**
* struct mhi_buf - Describes the buffer
* @page: buffer as a page
* @buf: cpu address for the buffer
* @phys_addr: physical address of the buffer
* @dma_addr: iommu address for the buffer
* @skb: skb of ip packet
* @len: # of bytes
* @name: Buffer label, for offload channel configurations name must be:
* ECA - Event context array data
* CCA - Channel context array data
*/
struct mhi_buf {
struct list_head node;
struct page *page;
void *buf;
phys_addr_t phys_addr;
dma_addr_t dma_addr;
struct sk_buff *skb;
size_t len;
const char *name; /* ECA, CCA */
};
/**
* struct mhi_driver - mhi driver information
* @id_table: NULL terminated channel ID names
* @ul_xfer_cb: UL data transfer callback
* @dl_xfer_cb: DL data transfer callback
* @status_cb: Asynchronous status callback
*/
struct mhi_driver {
const struct mhi_device_id *id_table;
int (*probe)(struct mhi_device *mhi_dev,
const struct mhi_device_id *id);
void (*remove)(struct mhi_device *mhi_dev);
void (*ul_xfer_cb)(struct mhi_device *mhi_dev, struct mhi_result *res);
void (*dl_xfer_cb)(struct mhi_device *mhi_dev, struct mhi_result *res);
void (*status_cb)(struct mhi_device *mhi_dev, enum MHI_CB mhi_cb);
struct device_driver driver;
};
#define to_mhi_driver(drv) container_of(drv, struct mhi_driver, driver)
#define to_mhi_device(dev) container_of(dev, struct mhi_device, dev)
static inline void mhi_device_set_devdata(struct mhi_device *mhi_dev,
void *priv)
{
mhi_dev->priv_data = priv;
}
static inline void *mhi_device_get_devdata(struct mhi_device *mhi_dev)
{
return mhi_dev->priv_data;
}
/**
* mhi_queue_transfer - Queue a buffer to hardware
* All transfers are asynchronous
* @mhi_dev: Device associated with the channels
* @dir: Data direction
* @buf: Data buffer (skb for hardware channels)
* @len: Size in bytes
* @mflags: Interrupt flags for the device
*/
static inline int mhi_queue_transfer(struct mhi_device *mhi_dev,
enum dma_data_direction dir,
void *buf,
size_t len,
enum MHI_FLAGS mflags)
{
if (dir == DMA_TO_DEVICE)
return mhi_dev->ul_xfer(mhi_dev, mhi_dev->ul_chan, buf, len,
mflags);
else
return mhi_dev->dl_xfer(mhi_dev, mhi_dev->dl_chan, buf, len,
mflags);
}
static inline void *mhi_controller_get_devdata(struct mhi_controller *mhi_cntrl)
{
return mhi_cntrl->priv_data;
}
static inline void mhi_free_controller(struct mhi_controller *mhi_cntrl)
{
kfree(mhi_cntrl);
}
/**
* mhi_driver_register - Register driver with MHI framework
* @mhi_drv: mhi_driver structure
*/
int mhi_driver_register(struct mhi_driver *mhi_drv);
/**
* mhi_driver_unregister - Unregister a driver for mhi_devices
* @mhi_drv: mhi_driver structure
*/
void mhi_driver_unregister(struct mhi_driver *mhi_drv);
/**
* mhi_device_configure - configure ECA or CCA context
* For offload channels that the client manages, call this
* function to configure the channel context or event context
* array associated with the channel
* @mhi_dev: Device associated with the channels
* @dir: Direction of the channel
* @mhi_buf: Configuration data
* @elements: # of configuration elements
*/
int mhi_device_configure(struct mhi_device *mhi_dev,
enum dma_data_direction dir,
struct mhi_buf *mhi_buf,
int elements);
/**
* mhi_device_get - disable low power modes
* Only disables lpm, does not immediately exit low power mode
* if the controller is already in a low power mode
* @mhi_dev: Device associated with the channels
* @vote: requested vote (bus, device or both)
*/
void mhi_device_get(struct mhi_device *mhi_dev, int vote);
/**
* mhi_device_get_sync - disable low power modes
* Synchronously disable device and/or bus low power modes, exiting low power
* mode if the controller is already in a low power state
* @mhi_dev: Device associated with the channels
* @vote: requested vote (bus, device or both)
*/
int mhi_device_get_sync(struct mhi_device *mhi_dev, int vote);
/**
* mhi_device_put - re-enable low power modes
* @mhi_dev: Device associated with the channels
* @vote: vote to remove
*/
void mhi_device_put(struct mhi_device *mhi_dev, int vote);
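/*
 * Illustrative voting sequence (a sketch, not from the original source):
 *
 *	mhi_device_get_sync(mhi_dev, MHI_VOTE_DEVICE | MHI_VOTE_BUS);
 *	... perform latency-sensitive I/O while lpm is disabled ...
 *	mhi_device_put(mhi_dev, MHI_VOTE_DEVICE | MHI_VOTE_BUS);
 */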
/**
* mhi_prepare_for_transfer - setup channel for data transfer
* Moves both UL and DL channel from RESET to START state
* @mhi_dev: Device associated with the channels
*/
int mhi_prepare_for_transfer(struct mhi_device *mhi_dev);
/**
* mhi_unprepare_from_transfer - unprepare the channels
* Moves both UL and DL channels to RESET state
* @mhi_dev: Device associated with the channels
*/
void mhi_unprepare_from_transfer(struct mhi_device *mhi_dev);
/**
* mhi_pause_transfer - Pause the current transfer
* Moves both UL and DL channels to STOP state to halt
* pending transfers.
* @mhi_dev: Device associated with the channels
*/
int mhi_pause_transfer(struct mhi_device *mhi_dev);
/**
* mhi_resume_transfer - resume current transfer
* Moves both UL and DL channels to START state to
* resume transfer.
* @mhi_dev: Device associated with the channels
*/
int mhi_resume_transfer(struct mhi_device *mhi_dev);
/**
* mhi_get_no_free_descriptors - Get transfer ring length
* Get # of TD available to queue buffers
* @mhi_dev: Device associated with the channels
* @dir: Direction of the channel
*/
int mhi_get_no_free_descriptors(struct mhi_device *mhi_dev,
enum dma_data_direction dir);
/**
* mhi_poll - poll for any available data to consume
* This is only applicable for DL direction
* @mhi_dev: Device associated with the channels
* @budget: # of descriptors to service before returning
*/
int mhi_poll(struct mhi_device *mhi_dev, u32 budget);
/**
* mhi_ioctl - user space IOCTL support for MHI channels
* Native support for setting TIOCM
* @mhi_dev: Device associated with the channels
* @cmd: IOCTL cmd
* @arg: Optional parameter, ioctl cmd specific
*/
long mhi_ioctl(struct mhi_device *mhi_dev, unsigned int cmd, unsigned long arg);
/**
* mhi_alloc_controller - Allocate mhi_controller structure
* Allocate controller structure and additional data for controller
* private data. You may get the private data pointer by calling
* mhi_controller_get_devdata
* @size: # of additional bytes to allocate
*/
struct mhi_controller *mhi_alloc_controller(size_t size);
/**
* of_register_mhi_controller - Register MHI controller
* Registers MHI controller with MHI bus framework. DT must be supported
* @mhi_cntrl: MHI controller to register
*/
int of_register_mhi_controller(struct mhi_controller *mhi_cntrl);
void mhi_unregister_mhi_controller(struct mhi_controller *mhi_cntrl);
/**
* mhi_bdf_to_controller - Look up a registered controller
* Search for controller based on device identification
* @domain: RC domain of the device
* @bus: Bus device connected to
* @slot: Slot device assigned to
* @dev_id: Device Identification
*/
struct mhi_controller *mhi_bdf_to_controller(u32 domain, u32 bus, u32 slot,
u32 dev_id);
/**
* mhi_prepare_for_power_up - Do pre-initialization before power up
* This is optional; call this before power up if the controller does not
* want the bus framework to automatically free any allocated memory during
* the shutdown process.
* @mhi_cntrl: MHI controller
*/
int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl);
/**
* mhi_async_power_up - Starts MHI power up sequence
* @mhi_cntrl: MHI controller
*/
int mhi_async_power_up(struct mhi_controller *mhi_cntrl);
int mhi_sync_power_up(struct mhi_controller *mhi_cntrl);
/**
* mhi_power_down - Start MHI power down sequence
* @mhi_cntrl: MHI controller
* @graceful: if true the link is still accessible, so do a graceful shutdown
* process; otherwise shut down the host without putting the device into
* RESET state
*/
void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful);
/**
* mhi_unprepare_after_power_down - free any allocated memory for power up
* @mhi_cntrl: MHI controller
*/
void mhi_unprepare_after_power_down(struct mhi_controller *mhi_cntrl);
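/*
 * A plausible controller life cycle using the APIs above; the exact
 * ordering is controller-driver specific and is only sketched here:
 *
 *	mhi_cntrl = mhi_alloc_controller(sizeof(*priv));
 *	of_register_mhi_controller(mhi_cntrl);
 *	mhi_prepare_for_power_up(mhi_cntrl);	(optional)
 *	mhi_sync_power_up(mhi_cntrl);
 *	...
 *	mhi_power_down(mhi_cntrl, true);
 *	mhi_unprepare_after_power_down(mhi_cntrl);
 *	mhi_unregister_mhi_controller(mhi_cntrl);
 */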
/**
* mhi_pm_suspend - Move MHI into a suspended state
* Transition to MHI M3 state from M0/M1/M2 state
* @mhi_cntrl: MHI controller
*/
int mhi_pm_suspend(struct mhi_controller *mhi_cntrl);
/**
* mhi_pm_fast_suspend - Move host into suspend state while keeping
* the device in active state.
* @mhi_cntrl: MHI controller
* @notify_client: if true, clients will get a notification about lpm transition
*/
int mhi_pm_fast_suspend(struct mhi_controller *mhi_cntrl, bool notify_client);
/**
* mhi_pm_resume - Resume MHI from suspended state
* Transition to MHI M0 state from M3 state
* @mhi_cntrl: MHI controller
*/
int mhi_pm_resume(struct mhi_controller *mhi_cntrl);
/**
* mhi_pm_fast_resume - Move host into resume state from fast suspend state
* @mhi_cntrl: MHI controller
* @notify_client: if true, clients will get a notification about lpm transition
*/
int mhi_pm_fast_resume(struct mhi_controller *mhi_cntrl, bool notify_client);
/**
* mhi_download_rddm_img - Download ramdump image from device for
* debugging purposes.
* @mhi_cntrl: MHI controller
* @in_panic: If we are trying to capture the image while in kernel panic
*/
int mhi_download_rddm_img(struct mhi_controller *mhi_cntrl, bool in_panic);
/**
* mhi_force_rddm_mode - Force external device into rddm mode
* to collect device ramdump. This is useful if the host driver asserts
* and we need to see the device state as well.
* @mhi_cntrl: MHI controller
*/
int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl);
/**
* mhi_dump_sfr - Print SFR string from RDDM table.
* @mhi_cntrl: MHI controller
*/
void mhi_dump_sfr(struct mhi_controller *mhi_cntrl);
/**
* mhi_get_remote_time_sync - Get external soc time relative to local soc time
* using MMIO method.
* @mhi_dev: Device associated with the channels
* @t_host: Pointer to output local soc time
* @t_dev: Pointer to output remote soc time
*/
int mhi_get_remote_time_sync(struct mhi_device *mhi_dev,
u64 *t_host,
u64 *t_dev);
/**
* mhi_get_remote_time - Get external modem time relative to host time
* Trigger an event to capture modem time, and also capture host time so the
* client can do a relative drift comparison.
* Recommended that only the tsync device calls this method; do not call it
* from atomic context.
* @mhi_dev: Device associated with the channels
* @sequence: unique sequence id to track the event
* @cb_func: callback function invoked with the captured times
*/
int mhi_get_remote_time(struct mhi_device *mhi_dev,
u32 sequence,
void (*cb_func)(struct mhi_device *mhi_dev,
u32 sequence,
u64 local_time,
u64 remote_time));
/**
* mhi_get_mhi_state - Return MHI state of device
* @mhi_cntrl: MHI controller
*/
enum mhi_dev_state mhi_get_mhi_state(struct mhi_controller *mhi_cntrl);
/**
* mhi_set_mhi_state - Set device state
* @mhi_cntrl: MHI controller
* @state: state to set
*/
void mhi_set_mhi_state(struct mhi_controller *mhi_cntrl,
enum mhi_dev_state state);
/**
* mhi_is_active - helper function to determine if MHI is in active state
* @mhi_dev: client device
*/
static inline bool mhi_is_active(struct mhi_device *mhi_dev)
{
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
return (mhi_cntrl->dev_state >= MHI_STATE_M0 &&
mhi_cntrl->dev_state <= MHI_STATE_M3_FAST);
}
/**
* mhi_control_error - MHI controller went into unrecoverable error state.
* Will transition MHI into Linkdown state. Do not call from atomic
* context.
* @mhi_cntrl: MHI controller
*/
void mhi_control_error(struct mhi_controller *mhi_cntrl);
/**
* mhi_debug_reg_dump - dump MHI registers for debug purpose
* @mhi_cntrl: MHI controller
*/
void mhi_debug_reg_dump(struct mhi_controller *mhi_cntrl);
#ifndef CONFIG_ARCH_QCOM
#ifdef CONFIG_MHI_DEBUG
#define MHI_VERB(fmt, ...) do { \
if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_VERBOSE) \
pr_debug("[D][%s] " fmt, __func__, ##__VA_ARGS__);\
} while (0)
#else
#define MHI_VERB(fmt, ...)
#endif
#define MHI_LOG(fmt, ...) do { \
if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_INFO) \
printk("%s[I][%s] " fmt, KERN_INFO, __func__, \
##__VA_ARGS__); \
} while (0)
#define MHI_ERR(fmt, ...) do { \
if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_ERROR) \
printk("%s[E][%s] " fmt, KERN_ERR, __func__, \
##__VA_ARGS__); \
} while (0)
#define MHI_CRITICAL(fmt, ...) do { \
if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_CRITICAL) \
printk("%s[C][%s] " fmt, KERN_ALERT, __func__, \
##__VA_ARGS__); \
} while (0)
#else /* ARCH QCOM */
#include <linux/ipc_logging.h>
#ifdef CONFIG_MHI_DEBUG
#define MHI_VERB(fmt, ...) do { \
if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_VERBOSE) \
printk("%s[D][%s] " fmt, KERN_ERR, __func__, \
##__VA_ARGS__); \
if (mhi_cntrl->log_lvl <= MHI_MSG_LVL_VERBOSE) \
ipc_log_string(mhi_cntrl->log_buf, "%s[D][%s] " fmt, \
"", __func__, ##__VA_ARGS__); \
} while (0)
#else
#define MHI_VERB(fmt, ...) do { \
if (mhi_cntrl->log_lvl <= MHI_MSG_LVL_VERBOSE) \
ipc_log_string(mhi_cntrl->log_buf, "%s[D][%s] " fmt, \
"", __func__, ##__VA_ARGS__); \
} while (0)
#endif
#define MHI_LOG(fmt, ...) do { \
if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_INFO) \
printk("%s[I][%s] " fmt, KERN_ERR, __func__, \
##__VA_ARGS__); \
if (mhi_cntrl->log_lvl <= MHI_MSG_LVL_INFO) \
ipc_log_string(mhi_cntrl->log_buf, "%s[I][%s] " fmt, \
"", __func__, ##__VA_ARGS__); \
} while (0)
#define MHI_ERR(fmt, ...) do { \
if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_ERROR) \
printk("%s[E][%s] " fmt, KERN_ERR, __func__, \
##__VA_ARGS__); \
if (mhi_cntrl->log_lvl <= MHI_MSG_LVL_ERROR) \
ipc_log_string(mhi_cntrl->log_buf, "%s[E][%s] " fmt, \
"", __func__, ##__VA_ARGS__); \
} while (0)
#define MHI_CRITICAL(fmt, ...) do { \
if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_CRITICAL) \
printk("%s[C][%s] " fmt, KERN_ERR, __func__, \
##__VA_ARGS__); \
if (mhi_cntrl->log_lvl <= MHI_MSG_LVL_CRITICAL) \
ipc_log_string(mhi_cntrl->log_buf, "%s[C][%s] " fmt, \
"", __func__, ##__VA_ARGS__); \
} while (0)
#endif
#endif /* _MHI_H_ */

include/linux/mod_devicetable.h

@ -820,4 +820,17 @@ struct wmi_device_id {
const void *context;
};
#define MHI_NAME_SIZE 32
/**
* struct mhi_device_id - MHI device identification
* @chan: MHI channel name
* @driver_data: driver data
*/
struct mhi_device_id {
const char chan[MHI_NAME_SIZE];
kernel_ulong_t driver_data;
};
#endif /* LINUX_MOD_DEVICETABLE_H */