Merge android12-5.10.19+ (0fc8633) into msm-5.10
* refs/heads/tmp-0fc8633:
  FROMLIST: dt-bindings: usb: usb-xhci: add USB offload support
  FROMLIST: usb: xhci-plat: add xhci_plat_priv_overwrite
  FROMLIST: usb: host: export symbols for xhci hooks usage
  FROMLIST: usb: host: add xhci hooks for USB offload
  FROMLIST: BACKPORT: Kbuild: Support nested composite objects
  FROMGIT: Kbuild: Make composite object searching more generic
  Revert "ANDROID: kbuild: simplify cmd_mod"
  Revert "ANDROID: kbuild: handle excessively long argument lists"
  UPSTREAM: fs: anon_inodes: rephrase to appropriate kernel-doc
  FROMGIT: usb: dwc3: document usb_psy in struct dwc3
  FROMGIT: usb: dwc3: Fix dereferencing of null dwc->usb_psy
  ANDROID: cgroup/cpuset: Fix suspicous RCU usage WARNING
  ANDROID: Adding kprobes build configs for Cuttlefish
  FROMLIST: firmware: arm_scmi: add dynamic scmi devices creation
  FROMLIST: firmware: arm_scmi: add protocol modularization support
  FROMLIST: firmware: arm_scmi: make notify_priv really private
  FROMLIST: firmware: arm_scmi: cleanup events registration transient code
  FROMLIST: firmware: arm_scmi: cleanup unused core xfer wrappers
  FROMLIST: firmware: arm_scmi: cleanup legacy protocol init code
  FROMLIST: firmware: arm_scmi: make references to handle const
  FROMLIST: firmware: arm_scmi: remove legacy scmi_voltage_ops protocol interface
  FROMLIST: regulator: scmi: port driver to the new scmi_voltage_proto_ops interface
  FROMLIST: firmware: arm_scmi: port Voltage protocol to new protocols interface
  FROMLIST: firmware: arm_scmi: port SystemPower protocol to new protocols interface
  FROMLIST: firmware: arm_scmi: remove legacy scmi_sensor_ops protocol interface
  FROMLIST: hwmon: (scmi) port driver to the new scmi_sensor_proto_ops interface
  FROMLIST: firmware: arm_scmi: port Sensor protocol to new protocols interface
  FROMLIST: firmware: arm_scmi: remove legacy scmi_reset_ops protocol interface
  FROMLIST: reset: reset-scmi: port driver to the new scmi_reset_proto_ops interface
  FROMLIST: firmware: arm_scmi: port Reset protocol to new protocols interface
  FROMLIST: firmware: arm_scmi: remove legacy scmi_clk_ops protocol interface
  FROMLIST: clk: scmi: port driver to the new scmi_clk_proto_ops interface
  FROMLIST: firmware: arm_scmi: port Clock protocol to new protocols interface
  FROMLIST: firmware: arm_scmi: remove legacy scmi_power_ops protocol interface
  FROMLIST: firmware: arm_scmi: port GenPD driver to the new scmi_power_proto_ops interface
  FROMLIST: firmware: arm_scmi: port Power protocol to new protocols interface
  FROMLIST: firmware: arm_scmi: remove legacy scmi_perf_ops protocol interface
  FROMLIST: cpufreq: scmi: port driver to the new scmi_perf_proto_ops interface
  FROMLIST: firmware: arm_scmi: port Perf protocol to new protocols interface
  FROMLIST: firmware: arm_scmi: port Base protocol to new interface
  FROMLIST: firmware: arm_scmi: add helper to access revision area memory
  FROMLIST: firmware: arm_scmi: add new protocol handle core xfer ops
  FROMLIST: firmware: arm_scmi: convert events registration to protocol handles
  FROMLIST: firmware: arm_scmi: refactor events registration
  FROMLIST: firmware: arm_scmi: introduce new devres notification ops
  FROMLIST: firmware: arm_scmi: make notifications aware of protocols users
  FROMLIST: firmware: arm_scmi: add devm_acquire_protocol helper
  FROMLIST: firmware: arm_scmi: introduce devres get/put protocols operations
  FROMLIST: firmware: arm_scmi: introduce protocol handle definitions
  FROMLIST: firmware: arm_scmi: review protocol registration interface
  UPSTREAM: firmware: arm_scmi: Fix call site of scmi_notification_exit
  UPSTREAM: MAINTAINERS: Update ARM SCMI entry
  UPSTREAM: firmware: arm_scmi: Augment SMC/HVC to allow optional interrupt
  UPSTREAM: dt-bindings: arm: Add optional interrupt to smc/hvc SCMI transport
  UPSTREAM: cpufreq: arm_scmi: Discover the power scale in performance protocol
  UPSTREAM: PM: EM: Add a flag indicating units of power values in Energy Model
  UPSTREAM: firmware: arm_scmi: Add power_scale_mw_get() interface
  UPSTREAM: arm64: defconfig: Enable ARM SCMI protocol and drivers
  UPSTREAM: regulator: add SCMI driver
  UPSTREAM: regulator: core: add of_match_full_name boolean flag
  UPSTREAM: dt-bindings: arm: remove optional properties for SCMI Regulators
  UPSTREAM: firmware: arm_scmi: Remove residual _le structs naming
  UPSTREAM: firmware: arm_scmi: Add SCMI v3.0 sensor notifications
  UPSTREAM: firmware: arm_scmi: Add SCMI v3.0 sensor configuration support
  UPSTREAM: firmware: arm_scmi: Add SCMI v3.0 sensors timestamped reads
  UPSTREAM: hwmon: (scmi) Update hwmon internal scale data type
  UPSTREAM: firmware: arm_scmi: Add support to enumerated SCMI voltage domain device
  UPSTREAM: firmware: arm_scmi: Add voltage domain management protocol support
  UPSTREAM: dt-bindings: arm: Add support for SCMI Regulators
  UPSTREAM: firmware: arm_scmi: Add SCMI v3.0 sensors descriptors extensions
  UPSTREAM: firmware: arm_scmi: Add full list of sensor type enumeration
  UPSTREAM: firmware: arm_scmi: Rework scmi_sensors_protocol_init
  ANDROID: GKI: Enable more networking configs
  ANDROID: clang: update to 12.0.3
  ANDROID: GKI: amlogic: enable BCM WLAN as modules
  FROMGIT: usb: typec: tcpm: Wait for vbus discharge to VSAFE0V before toggling
  FROMGIT: usb: dwc3: add an alternate path in vbus_draw callback
  FROMGIT: usb: dwc3: add a power supply for current control

Conflicts:
  Documentation/devicetree/bindings
  Documentation/devicetree/bindings/arm/arm,scmi.txt
  Documentation/devicetree/bindings/usb/usb-xhci.txt

Change-Id: If4bdc6485dbf86d982bf273b3638dad10fb93b35
Signed-off-by: Ivaylo Georgiev <irgeorgiev@codeaurora.org>
This commit is contained in: commit 56f04d7dca
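The bulk of this merge is the SCMI protocol rework visible in the diffs below: drivers no longer reach protocol operations through handle-><proto>_ops, but instead acquire a protocol at probe time and get back an operations table plus a per-protocol handle that every call then takes. A minimal sketch of the new driver-side pattern, modeled on the scmi_clocks_probe() and scmi_cpufreq_probe() changes in this merge (the probe function name here is illustrative, not part of the diff):

/* Sketch of the new SCMI protocol acquisition pattern (illustrative). */
static const struct scmi_clk_proto_ops *clk_ops;

static int example_scmi_probe(struct scmi_device *sdev)
{
	struct scmi_protocol_handle *ph;
	const struct scmi_handle *handle = sdev->handle;
	int count;

	if (!handle)
		return -ENODEV;

	/* Returns the protocol ops and a protocol handle, devres-managed. */
	clk_ops = handle->devm_get_protocol(sdev, SCMI_PROTOCOL_CLOCK, &ph);
	if (IS_ERR(clk_ops))
		return PTR_ERR(clk_ops);

	/* Operations now take the protocol handle instead of the SCMI handle. */
	count = clk_ops->count_get(ph);

	return count < 0 ? -EINVAL : 0;
}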
@@ -16969,6 +16969,7 @@ F: drivers/mfd/syscon.c

SYSTEM CONTROL & POWER/MANAGEMENT INTERFACE (SCPI/SCMI) Message Protocol drivers
M:	Sudeep Holla <sudeep.holla@arm.com>
R:	Cristian Marussi <cristian.marussi@arm.com>
L:	linux-arm-kernel@lists.infradead.org
S:	Maintained
F:	Documentation/devicetree/bindings/arm/arm,sc[mp]i.txt
@@ -16976,6 +16977,7 @@ F: drivers/clk/clk-sc[mp]i.c
F:	drivers/cpufreq/sc[mp]i-cpufreq.c
F:	drivers/firmware/arm_scmi/
F:	drivers/firmware/arm_scpi.c
F:	drivers/regulator/scmi-regulator.c
F:	drivers/reset/reset-scmi.c
F:	include/linux/sc[mp]i_protocol.h
F:	include/trace/events/scmi.h
[File diff suppressed because it is too large]
@@ -15,6 +15,15 @@ CONFIG_REALTEK_PHY=m
CONFIG_STMMAC_ETH=m
CONFIG_STMMAC_PLATFORM=m

#
# WLAN
#
CONFIG_WLAN_VENDOR_BROADCOM=y
CONFIG_BRCMUTIL=m
CONFIG_BRCMFMAC=m
CONFIG_BRCMFMAC_PROTO_BCDC=y
CONFIG_BRCMFMAC_SDIO=y

#
# Amlogic
#
@@ -93,8 +93,10 @@ CONFIG_ARM_IMX_CPUFREQ_DT=m
CONFIG_ARM_QCOM_CPUFREQ_NVMEM=y
CONFIG_ARM_QCOM_CPUFREQ_HW=y
CONFIG_ARM_RASPBERRYPI_CPUFREQ=m
CONFIG_ARM_SCMI_CPUFREQ=y
CONFIG_ARM_TEGRA186_CPUFREQ=y
CONFIG_QORIQ_CPUFREQ=y
CONFIG_ARM_SCMI_PROTOCOL=y
CONFIG_ARM_SCPI_PROTOCOL=y
CONFIG_RASPBERRYPI_FIRMWARE=y
CONFIG_INTEL_STRATIX10_SERVICE=y
@@ -522,6 +524,7 @@ CONFIG_POWER_RESET_SYSCON=y
CONFIG_SYSCON_REBOOT_MODE=y
CONFIG_BATTERY_SBS=m
CONFIG_BATTERY_BQ27XXX=y
CONFIG_SENSORS_ARM_SCMI=y
CONFIG_SENSORS_ARM_SCPI=y
CONFIG_SENSORS_LM90=m
CONFIG_SENSORS_PWM_FAN=m
@@ -865,6 +868,7 @@ CONFIG_CROS_EC=y
CONFIG_CROS_EC_I2C=y
CONFIG_CROS_EC_SPI=y
CONFIG_CROS_EC_CHARDEV=m
CONFIG_COMMON_CLK_SCMI=y
CONFIG_COMMON_CLK_RK808=y
CONFIG_COMMON_CLK_SCPI=y
CONFIG_COMMON_CLK_CS2000_CP=y
@@ -149,11 +149,12 @@ CONFIG_NF_CT_NETLINK=y
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=y
CONFIG_NETFILTER_XT_TARGET_CONNMARK=y
CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=y
CONFIG_NETFILTER_XT_TARGET_CT=y
CONFIG_NETFILTER_XT_TARGET_DSCP=y
CONFIG_NETFILTER_XT_TARGET_IDLETIMER=y
CONFIG_NETFILTER_XT_TARGET_MARK=y
CONFIG_NETFILTER_XT_TARGET_NFLOG=y
CONFIG_NETFILTER_XT_TARGET_NFQUEUE=y
CONFIG_NETFILTER_XT_TARGET_NOTRACK=y
CONFIG_NETFILTER_XT_TARGET_TEE=y
CONFIG_NETFILTER_XT_TARGET_TPROXY=y
CONFIG_NETFILTER_XT_TARGET_TRACE=y
@@ -164,6 +165,8 @@ CONFIG_NETFILTER_XT_MATCH_COMMENT=y
CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=y
CONFIG_NETFILTER_XT_MATCH_CONNMARK=y
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
CONFIG_NETFILTER_XT_MATCH_DSCP=y
CONFIG_NETFILTER_XT_MATCH_ESP=y
CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=y
CONFIG_NETFILTER_XT_MATCH_HELPER=y
CONFIG_NETFILTER_XT_MATCH_IPRANGE=y
@@ -171,6 +174,7 @@ CONFIG_NETFILTER_XT_MATCH_LENGTH=y
CONFIG_NETFILTER_XT_MATCH_LIMIT=y
CONFIG_NETFILTER_XT_MATCH_MAC=y
CONFIG_NETFILTER_XT_MATCH_MARK=y
CONFIG_NETFILTER_XT_MATCH_MULTIPORT=y
CONFIG_NETFILTER_XT_MATCH_OWNER=y
CONFIG_NETFILTER_XT_MATCH_POLICY=y
CONFIG_NETFILTER_XT_MATCH_PKTTYPE=y
@@ -213,7 +217,9 @@ CONFIG_IEEE802154_6LOWPAN=y
CONFIG_MAC802154=y
CONFIG_NET_SCHED=y
CONFIG_NET_SCH_HTB=y
CONFIG_NET_SCH_PRIO=y
CONFIG_NET_SCH_INGRESS=y
CONFIG_NET_CLS_FW=y
CONFIG_NET_CLS_U32=y
CONFIG_NET_CLS_BPF=y
CONFIG_NET_EMATCH=y
@@ -131,11 +131,12 @@ CONFIG_NF_CT_NETLINK=y
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=y
CONFIG_NETFILTER_XT_TARGET_CONNMARK=y
CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=y
CONFIG_NETFILTER_XT_TARGET_CT=y
CONFIG_NETFILTER_XT_TARGET_DSCP=y
CONFIG_NETFILTER_XT_TARGET_IDLETIMER=y
CONFIG_NETFILTER_XT_TARGET_MARK=y
CONFIG_NETFILTER_XT_TARGET_NFLOG=y
CONFIG_NETFILTER_XT_TARGET_NFQUEUE=y
CONFIG_NETFILTER_XT_TARGET_NOTRACK=y
CONFIG_NETFILTER_XT_TARGET_TEE=y
CONFIG_NETFILTER_XT_TARGET_TPROXY=y
CONFIG_NETFILTER_XT_TARGET_TRACE=y
@@ -146,6 +147,8 @@ CONFIG_NETFILTER_XT_MATCH_COMMENT=y
CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=y
CONFIG_NETFILTER_XT_MATCH_CONNMARK=y
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
CONFIG_NETFILTER_XT_MATCH_DSCP=y
CONFIG_NETFILTER_XT_MATCH_ESP=y
CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=y
CONFIG_NETFILTER_XT_MATCH_HELPER=y
CONFIG_NETFILTER_XT_MATCH_IPRANGE=y
@@ -153,6 +156,7 @@ CONFIG_NETFILTER_XT_MATCH_LENGTH=y
CONFIG_NETFILTER_XT_MATCH_LIMIT=y
CONFIG_NETFILTER_XT_MATCH_MAC=y
CONFIG_NETFILTER_XT_MATCH_MARK=y
CONFIG_NETFILTER_XT_MATCH_MULTIPORT=y
CONFIG_NETFILTER_XT_MATCH_OWNER=y
CONFIG_NETFILTER_XT_MATCH_POLICY=y
CONFIG_NETFILTER_XT_MATCH_PKTTYPE=y
@@ -195,7 +199,9 @@ CONFIG_IEEE802154_6LOWPAN=y
CONFIG_MAC802154=y
CONFIG_NET_SCHED=y
CONFIG_NET_SCH_HTB=y
CONFIG_NET_SCH_PRIO=y
CONFIG_NET_SCH_INGRESS=y
CONFIG_NET_CLS_FW=y
CONFIG_NET_CLS_U32=y
CONFIG_NET_CLS_BPF=y
CONFIG_NET_EMATCH=y
@@ -4,7 +4,7 @@ KMI_GENERATION=0
LLVM=1
DEPMOD=depmod
DTC=dtc
CLANG_PREBUILT_BIN=prebuilts-master/clang/host/linux-x86/clang-r407598/bin
CLANG_PREBUILT_BIN=prebuilts-master/clang/host/linux-x86/clang-r412851/bin
BUILDTOOLS_PREBUILT_BIN=build/build-tools/path/linux-x86

STOP_SHIP_TRACEPRINTK=1
build.config.gki_kprobes (new file, 20 lines)
@@ -0,0 +1,20 @@
DEFCONFIG=gki_defconfig
POST_DEFCONFIG_CMDS="check_defconfig && update_kprobes_config"
function update_kprobes_config() {
    ${KERNEL_DIR}/scripts/config --file ${OUT_DIR}/.config \
        -d LTO \
        -d LTO_CLANG_THIN \
        -d CFI \
        -d CFI_PERMISSIVE \
        -d CFI_CLANG \
        -e CONFIG_DYNAMIC_FTRACE \
        -e CONFIG_FUNCTION_TRACER \
        -e CONFIG_IRQSOFF_TRACER \
        -e CONFIG_FUNCTION_PROFILER \
        -e CONFIG_PREEMPT_TRACER \
        -e CONFIG_CHECKPOINT_RESTORE \
        -d CONFIG_RANDOMIZE_BASE
    (cd ${OUT_DIR} && \
     make ${CC_LD_ARG} O=${OUT_DIR} olddefconfig)
}
build.config.gki_kprobes.aarch64 (new file, 4 lines)
@@ -0,0 +1,4 @@
. ${ROOT_DIR}/${KERNEL_DIR}/build.config.common
. ${ROOT_DIR}/${KERNEL_DIR}/build.config.aarch64
. ${ROOT_DIR}/${KERNEL_DIR}/build.config.gki_kprobes
build.config.gki_kprobes.x86_64 (new file, 4 lines)
@@ -0,0 +1,4 @@
. ${ROOT_DIR}/${KERNEL_DIR}/build.config.common
. ${ROOT_DIR}/${KERNEL_DIR}/build.config.x86_64
. ${ROOT_DIR}/${KERNEL_DIR}/build.config.gki_kprobes
@@ -2,7 +2,7 @@
/*
 * System Control and Power Interface (SCMI) Protocol based clock driver
 *
 * Copyright (C) 2018 ARM Ltd.
 * Copyright (C) 2018-2020 ARM Ltd.
 */

#include <linux/clk-provider.h>
@@ -13,11 +13,13 @@
#include <linux/scmi_protocol.h>
#include <asm/div64.h>

static const struct scmi_clk_proto_ops *clk_ops;

struct scmi_clk {
	u32 id;
	struct clk_hw hw;
	const struct scmi_clock_info *info;
	const struct scmi_handle *handle;
	const struct scmi_protocol_handle *ph;
};

#define to_scmi_clk(clk) container_of(clk, struct scmi_clk, hw)
@@ -29,7 +31,7 @@ static unsigned long scmi_clk_recalc_rate(struct clk_hw *hw,
	u64 rate;
	struct scmi_clk *clk = to_scmi_clk(hw);

	ret = clk->handle->clk_ops->rate_get(clk->handle, clk->id, &rate);
	ret = clk_ops->rate_get(clk->ph, clk->id, &rate);
	if (ret)
		return 0;
	return rate;
@@ -69,21 +71,21 @@ static int scmi_clk_set_rate(struct clk_hw *hw, unsigned long rate,
{
	struct scmi_clk *clk = to_scmi_clk(hw);

	return clk->handle->clk_ops->rate_set(clk->handle, clk->id, rate);
	return clk_ops->rate_set(clk->ph, clk->id, rate);
}

static int scmi_clk_enable(struct clk_hw *hw)
{
	struct scmi_clk *clk = to_scmi_clk(hw);

	return clk->handle->clk_ops->enable(clk->handle, clk->id);
	return clk_ops->enable(clk->ph, clk->id);
}

static void scmi_clk_disable(struct clk_hw *hw)
{
	struct scmi_clk *clk = to_scmi_clk(hw);

	clk->handle->clk_ops->disable(clk->handle, clk->id);
	clk_ops->disable(clk->ph, clk->id);
}

static const struct clk_ops scmi_clk_ops = {
@@ -142,11 +144,16 @@ static int scmi_clocks_probe(struct scmi_device *sdev)
	struct device *dev = &sdev->dev;
	struct device_node *np = dev->of_node;
	const struct scmi_handle *handle = sdev->handle;
	struct scmi_protocol_handle *ph;

	if (!handle || !handle->clk_ops)
	if (!handle)
		return -ENODEV;

	count = handle->clk_ops->count_get(handle);
	clk_ops = handle->devm_get_protocol(sdev, SCMI_PROTOCOL_CLOCK, &ph);
	if (IS_ERR(clk_ops))
		return PTR_ERR(clk_ops);

	count = clk_ops->count_get(ph);
	if (count < 0) {
		dev_err(dev, "%pOFn: invalid clock output count\n", np);
		return -EINVAL;
@@ -167,14 +174,14 @@ static int scmi_clocks_probe(struct scmi_device *sdev)
		if (!sclk)
			return -ENOMEM;

		sclk->info = handle->clk_ops->info_get(handle, idx);
		sclk->info = clk_ops->info_get(ph, idx);
		if (!sclk->info) {
			dev_dbg(dev, "invalid clock info for idx %d\n", idx);
			continue;
		}

		sclk->id = idx;
		sclk->handle = handle;
		sclk->ph = ph;

		err = scmi_clk_ops_init(dev, sclk);
		if (err) {
@ -25,17 +25,17 @@ struct scmi_data {
|
||||
struct device *cpu_dev;
|
||||
};
|
||||
|
||||
static const struct scmi_handle *handle;
|
||||
static struct scmi_protocol_handle *ph;
|
||||
static const struct scmi_perf_proto_ops *perf_ops;
|
||||
|
||||
static unsigned int scmi_cpufreq_get_rate(unsigned int cpu)
|
||||
{
|
||||
struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu);
|
||||
const struct scmi_perf_ops *perf_ops = handle->perf_ops;
|
||||
struct scmi_data *priv = policy->driver_data;
|
||||
unsigned long rate;
|
||||
int ret;
|
||||
|
||||
ret = perf_ops->freq_get(handle, priv->domain_id, &rate, false);
|
||||
ret = perf_ops->freq_get(ph, priv->domain_id, &rate, false);
|
||||
if (ret)
|
||||
return 0;
|
||||
return rate / 1000;
|
||||
@ -50,19 +50,17 @@ static int
|
||||
scmi_cpufreq_set_target(struct cpufreq_policy *policy, unsigned int index)
|
||||
{
|
||||
struct scmi_data *priv = policy->driver_data;
|
||||
const struct scmi_perf_ops *perf_ops = handle->perf_ops;
|
||||
u64 freq = policy->freq_table[index].frequency;
|
||||
|
||||
return perf_ops->freq_set(handle, priv->domain_id, freq * 1000, false);
|
||||
return perf_ops->freq_set(ph, priv->domain_id, freq * 1000, false);
|
||||
}
|
||||
|
||||
static unsigned int scmi_cpufreq_fast_switch(struct cpufreq_policy *policy,
|
||||
unsigned int target_freq)
|
||||
{
|
||||
struct scmi_data *priv = policy->driver_data;
|
||||
const struct scmi_perf_ops *perf_ops = handle->perf_ops;
|
||||
|
||||
if (!perf_ops->freq_set(handle, priv->domain_id,
|
||||
if (!perf_ops->freq_set(ph, priv->domain_id,
|
||||
target_freq * 1000, true))
|
||||
return target_freq;
|
||||
|
||||
@ -75,7 +73,7 @@ scmi_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask)
|
||||
int cpu, domain, tdomain;
|
||||
struct device *tcpu_dev;
|
||||
|
||||
domain = handle->perf_ops->device_domain_id(cpu_dev);
|
||||
domain = perf_ops->device_domain_id(cpu_dev);
|
||||
if (domain < 0)
|
||||
return domain;
|
||||
|
||||
@ -87,7 +85,7 @@ scmi_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask)
|
||||
if (!tcpu_dev)
|
||||
continue;
|
||||
|
||||
tdomain = handle->perf_ops->device_domain_id(tcpu_dev);
|
||||
tdomain = perf_ops->device_domain_id(tcpu_dev);
|
||||
if (tdomain == domain)
|
||||
cpumask_set_cpu(cpu, cpumask);
|
||||
}
|
||||
@ -102,13 +100,13 @@ scmi_get_cpu_power(unsigned long *power, unsigned long *KHz,
|
||||
unsigned long Hz;
|
||||
int ret, domain;
|
||||
|
||||
domain = handle->perf_ops->device_domain_id(cpu_dev);
|
||||
domain = perf_ops->device_domain_id(cpu_dev);
|
||||
if (domain < 0)
|
||||
return domain;
|
||||
|
||||
/* Get the power cost of the performance domain. */
|
||||
Hz = *KHz * 1000;
|
||||
ret = handle->perf_ops->est_power_get(handle, domain, &Hz, power);
|
||||
ret = perf_ops->est_power_get(ph, domain, &Hz, power);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
@ -126,6 +124,7 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
|
||||
struct scmi_data *priv;
|
||||
struct cpufreq_frequency_table *freq_table;
|
||||
struct em_data_callback em_cb = EM_DATA_CB(scmi_get_cpu_power);
|
||||
bool power_scale_mw;
|
||||
|
||||
cpu_dev = get_cpu_device(policy->cpu);
|
||||
if (!cpu_dev) {
|
||||
@ -133,7 +132,7 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
ret = handle->perf_ops->device_opps_add(handle, cpu_dev);
|
||||
ret = perf_ops->device_opps_add(ph, cpu_dev);
|
||||
if (ret) {
|
||||
dev_warn(cpu_dev, "failed to add opps to the device\n");
|
||||
return ret;
|
||||
@ -172,7 +171,7 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
|
||||
}
|
||||
|
||||
priv->cpu_dev = cpu_dev;
|
||||
priv->domain_id = handle->perf_ops->device_domain_id(cpu_dev);
|
||||
priv->domain_id = perf_ops->device_domain_id(cpu_dev);
|
||||
|
||||
policy->driver_data = priv;
|
||||
policy->freq_table = freq_table;
|
||||
@ -180,16 +179,18 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
|
||||
/* SCMI allows DVFS request for any domain from any CPU */
|
||||
policy->dvfs_possible_from_any_cpu = true;
|
||||
|
||||
latency = handle->perf_ops->transition_latency_get(handle, cpu_dev);
|
||||
latency = perf_ops->transition_latency_get(ph, cpu_dev);
|
||||
if (!latency)
|
||||
latency = CPUFREQ_ETERNAL;
|
||||
|
||||
policy->cpuinfo.transition_latency = latency;
|
||||
|
||||
policy->fast_switch_possible =
|
||||
handle->perf_ops->fast_switch_possible(handle, cpu_dev);
|
||||
perf_ops->fast_switch_possible(ph, cpu_dev);
|
||||
|
||||
em_dev_register_perf_domain(cpu_dev, nr_opp, &em_cb, policy->cpus);
|
||||
power_scale_mw = perf_ops->power_scale_mw_get(ph);
|
||||
em_dev_register_perf_domain(cpu_dev, nr_opp, &em_cb, policy->cpus,
|
||||
power_scale_mw);
|
||||
|
||||
return 0;
|
||||
|
||||
@ -230,12 +231,17 @@ static int scmi_cpufreq_probe(struct scmi_device *sdev)
|
||||
{
|
||||
int ret;
|
||||
struct device *dev = &sdev->dev;
|
||||
const struct scmi_handle *handle;
|
||||
|
||||
handle = sdev->handle;
|
||||
|
||||
if (!handle || !handle->perf_ops)
|
||||
if (!handle)
|
||||
return -ENODEV;
|
||||
|
||||
perf_ops = handle->devm_get_protocol(sdev, SCMI_PROTOCOL_PERF, &ph);
|
||||
if (IS_ERR(perf_ops))
|
||||
return PTR_ERR(perf_ops);
|
||||
|
||||
#ifdef CONFIG_COMMON_CLK
|
||||
/* dummy clock provider as needed by OPP if clocks property is used */
|
||||
if (of_find_property(dev->of_node, "#clock-cells", NULL))
|
||||
|
@@ -4,7 +4,7 @@ scmi-driver-y = driver.o notify.o
scmi-transport-y = shmem.o
scmi-transport-$(CONFIG_MAILBOX) += mailbox.o
scmi-transport-$(CONFIG_HAVE_ARM_SMCCC_DISCOVERY) += smc.o
scmi-protocols-y = base.o clock.o perf.o power.o reset.o sensors.o system.o
scmi-protocols-y = base.o clock.o perf.o power.o reset.o sensors.o system.o voltage.o
scmi-module-objs := $(scmi-bus-y) $(scmi-driver-y) $(scmi-protocols-y) \
	$(scmi-transport-y)
obj-$(CONFIG_ARM_SCMI_PROTOCOL) += scmi-module.o
@ -7,6 +7,7 @@
|
||||
|
||||
#define pr_fmt(fmt) "SCMI Notifications BASE - " fmt
|
||||
|
||||
#include <linux/module.h>
|
||||
#include <linux/scmi_protocol.h>
|
||||
|
||||
#include "common.h"
|
||||
@ -50,30 +51,30 @@ struct scmi_base_error_notify_payld {
|
||||
* scmi_base_attributes_get() - gets the implementation details
|
||||
* that are associated with the base protocol.
|
||||
*
|
||||
* @handle: SCMI entity handle
|
||||
* @ph: SCMI protocol handle
|
||||
*
|
||||
* Return: 0 on success, else appropriate SCMI error.
|
||||
*/
|
||||
static int scmi_base_attributes_get(const struct scmi_handle *handle)
|
||||
static int scmi_base_attributes_get(const struct scmi_protocol_handle *ph)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_xfer *t;
|
||||
struct scmi_msg_resp_base_attributes *attr_info;
|
||||
struct scmi_revision_info *rev = handle->version;
|
||||
struct scmi_revision_info *rev = ph->get_priv(ph);
|
||||
|
||||
ret = scmi_xfer_get_init(handle, PROTOCOL_ATTRIBUTES,
|
||||
SCMI_PROTOCOL_BASE, 0, sizeof(*attr_info), &t);
|
||||
ret = ph->xops->xfer_get_init(ph, PROTOCOL_ATTRIBUTES,
|
||||
0, sizeof(*attr_info), &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
if (!ret) {
|
||||
attr_info = t->rx.buf;
|
||||
rev->num_protocols = attr_info->num_protocols;
|
||||
rev->num_agents = attr_info->num_agents;
|
||||
}
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
|
||||
return ret;
|
||||
}
|
||||
@ -81,19 +82,20 @@ static int scmi_base_attributes_get(const struct scmi_handle *handle)
|
||||
/**
|
||||
* scmi_base_vendor_id_get() - gets vendor/subvendor identifier ASCII string.
|
||||
*
|
||||
* @handle: SCMI entity handle
|
||||
* @ph: SCMI protocol handle
|
||||
* @sub_vendor: specify true if sub-vendor ID is needed
|
||||
*
|
||||
* Return: 0 on success, else appropriate SCMI error.
|
||||
*/
|
||||
static int
|
||||
scmi_base_vendor_id_get(const struct scmi_handle *handle, bool sub_vendor)
|
||||
scmi_base_vendor_id_get(const struct scmi_protocol_handle *ph, bool sub_vendor)
|
||||
{
|
||||
u8 cmd;
|
||||
int ret, size;
|
||||
char *vendor_id;
|
||||
struct scmi_xfer *t;
|
||||
struct scmi_revision_info *rev = handle->version;
|
||||
struct scmi_revision_info *rev = ph->get_priv(ph);
|
||||
|
||||
|
||||
if (sub_vendor) {
|
||||
cmd = BASE_DISCOVER_SUB_VENDOR;
|
||||
@ -105,15 +107,15 @@ scmi_base_vendor_id_get(const struct scmi_handle *handle, bool sub_vendor)
|
||||
size = ARRAY_SIZE(rev->vendor_id);
|
||||
}
|
||||
|
||||
ret = scmi_xfer_get_init(handle, cmd, SCMI_PROTOCOL_BASE, 0, size, &t);
|
||||
ret = ph->xops->xfer_get_init(ph, cmd, 0, size, &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
if (!ret)
|
||||
memcpy(vendor_id, t->rx.buf, size);
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
|
||||
return ret;
|
||||
}
|
||||
@ -123,30 +125,30 @@ scmi_base_vendor_id_get(const struct scmi_handle *handle, bool sub_vendor)
|
||||
* implementation 32-bit version. The format of the version number is
|
||||
* vendor-specific
|
||||
*
|
||||
* @handle: SCMI entity handle
|
||||
* @ph: SCMI protocol handle
|
||||
*
|
||||
* Return: 0 on success, else appropriate SCMI error.
|
||||
*/
|
||||
static int
|
||||
scmi_base_implementation_version_get(const struct scmi_handle *handle)
|
||||
scmi_base_implementation_version_get(const struct scmi_protocol_handle *ph)
|
||||
{
|
||||
int ret;
|
||||
__le32 *impl_ver;
|
||||
struct scmi_xfer *t;
|
||||
struct scmi_revision_info *rev = handle->version;
|
||||
struct scmi_revision_info *rev = ph->get_priv(ph);
|
||||
|
||||
ret = scmi_xfer_get_init(handle, BASE_DISCOVER_IMPLEMENT_VERSION,
|
||||
SCMI_PROTOCOL_BASE, 0, sizeof(*impl_ver), &t);
|
||||
ret = ph->xops->xfer_get_init(ph, BASE_DISCOVER_IMPLEMENT_VERSION,
|
||||
0, sizeof(*impl_ver), &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
if (!ret) {
|
||||
impl_ver = t->rx.buf;
|
||||
rev->impl_ver = le32_to_cpu(*impl_ver);
|
||||
}
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
|
||||
return ret;
|
||||
}
|
||||
@ -155,12 +157,13 @@ scmi_base_implementation_version_get(const struct scmi_handle *handle)
|
||||
* scmi_base_implementation_list_get() - gets the list of protocols it is
|
||||
* OSPM is allowed to access
|
||||
*
|
||||
* @handle: SCMI entity handle
|
||||
* @ph: SCMI protocol handle
|
||||
* @protocols_imp: pointer to hold the list of protocol identifiers
|
||||
*
|
||||
* Return: 0 on success, else appropriate SCMI error.
|
||||
*/
|
||||
static int scmi_base_implementation_list_get(const struct scmi_handle *handle,
|
||||
static int
|
||||
scmi_base_implementation_list_get(const struct scmi_protocol_handle *ph,
|
||||
u8 *protocols_imp)
|
||||
{
|
||||
u8 *list;
|
||||
@ -168,10 +171,10 @@ static int scmi_base_implementation_list_get(const struct scmi_handle *handle,
|
||||
struct scmi_xfer *t;
|
||||
__le32 *num_skip, *num_ret;
|
||||
u32 tot_num_ret = 0, loop_num_ret;
|
||||
struct device *dev = handle->dev;
|
||||
struct device *dev = ph->dev;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, BASE_DISCOVER_LIST_PROTOCOLS,
|
||||
SCMI_PROTOCOL_BASE, sizeof(*num_skip), 0, &t);
|
||||
ret = ph->xops->xfer_get_init(ph, BASE_DISCOVER_LIST_PROTOCOLS,
|
||||
sizeof(*num_skip), 0, &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
@ -183,7 +186,7 @@ static int scmi_base_implementation_list_get(const struct scmi_handle *handle,
|
||||
/* Set the number of protocols to be skipped/already read */
|
||||
*num_skip = cpu_to_le32(tot_num_ret);
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
if (ret)
|
||||
break;
|
||||
|
||||
@ -198,10 +201,10 @@ static int scmi_base_implementation_list_get(const struct scmi_handle *handle,
|
||||
|
||||
tot_num_ret += loop_num_ret;
|
||||
|
||||
scmi_reset_rx_to_maxsz(handle, t);
|
||||
ph->xops->reset_rx_to_maxsz(ph, t);
|
||||
} while (loop_num_ret);
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
|
||||
return ret;
|
||||
}
|
||||
@ -209,7 +212,7 @@ static int scmi_base_implementation_list_get(const struct scmi_handle *handle,
|
||||
/**
|
||||
* scmi_base_discover_agent_get() - discover the name of an agent
|
||||
*
|
||||
* @handle: SCMI entity handle
|
||||
* @ph: SCMI protocol handle
|
||||
* @id: Agent identifier
|
||||
* @name: Agent identifier ASCII string
|
||||
*
|
||||
@ -218,63 +221,63 @@ static int scmi_base_implementation_list_get(const struct scmi_handle *handle,
|
||||
*
|
||||
* Return: 0 on success, else appropriate SCMI error.
|
||||
*/
|
||||
static int scmi_base_discover_agent_get(const struct scmi_handle *handle,
|
||||
static int scmi_base_discover_agent_get(const struct scmi_protocol_handle *ph,
|
||||
int id, char *name)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_xfer *t;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, BASE_DISCOVER_AGENT,
|
||||
SCMI_PROTOCOL_BASE, sizeof(__le32),
|
||||
SCMI_MAX_STR_SIZE, &t);
|
||||
ret = ph->xops->xfer_get_init(ph, BASE_DISCOVER_AGENT,
|
||||
sizeof(__le32), SCMI_MAX_STR_SIZE, &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
put_unaligned_le32(id, t->tx.buf);
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
if (!ret)
|
||||
strlcpy(name, t->rx.buf, SCMI_MAX_STR_SIZE);
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int scmi_base_error_notify(const struct scmi_handle *handle, bool enable)
|
||||
static int scmi_base_error_notify(const struct scmi_protocol_handle *ph,
|
||||
bool enable)
|
||||
{
|
||||
int ret;
|
||||
u32 evt_cntl = enable ? BASE_TP_NOTIFY_ALL : 0;
|
||||
struct scmi_xfer *t;
|
||||
struct scmi_msg_base_error_notify *cfg;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, BASE_NOTIFY_ERRORS,
|
||||
SCMI_PROTOCOL_BASE, sizeof(*cfg), 0, &t);
|
||||
ret = ph->xops->xfer_get_init(ph, BASE_NOTIFY_ERRORS,
|
||||
sizeof(*cfg), 0, &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
cfg = t->tx.buf;
|
||||
cfg->event_control = cpu_to_le32(evt_cntl);
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int scmi_base_set_notify_enabled(const struct scmi_handle *handle,
|
||||
static int scmi_base_set_notify_enabled(const struct scmi_protocol_handle *ph,
|
||||
u8 evt_id, u32 src_id, bool enable)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = scmi_base_error_notify(handle, enable);
|
||||
ret = scmi_base_error_notify(ph, enable);
|
||||
if (ret)
|
||||
pr_debug("FAIL_ENABLED - evt[%X] ret:%d\n", evt_id, ret);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void *scmi_base_fill_custom_report(const struct scmi_handle *handle,
|
||||
static void *scmi_base_fill_custom_report(const struct scmi_protocol_handle *ph,
|
||||
u8 evt_id, ktime_t timestamp,
|
||||
const void *payld, size_t payld_sz,
|
||||
void *report, u32 *src_id)
|
||||
@ -318,17 +321,24 @@ static const struct scmi_event_ops base_event_ops = {
|
||||
.fill_custom_report = scmi_base_fill_custom_report,
|
||||
};
|
||||
|
||||
int scmi_base_protocol_init(struct scmi_handle *h)
|
||||
static const struct scmi_protocol_events base_protocol_events = {
|
||||
.queue_sz = 4 * SCMI_PROTO_QUEUE_SZ,
|
||||
.ops = &base_event_ops,
|
||||
.evts = base_events,
|
||||
.num_events = ARRAY_SIZE(base_events),
|
||||
.num_sources = SCMI_BASE_NUM_SOURCES,
|
||||
};
|
||||
|
||||
static int scmi_base_protocol_init(const struct scmi_protocol_handle *ph)
|
||||
{
|
||||
int id, ret;
|
||||
u8 *prot_imp;
|
||||
u32 version;
|
||||
char name[SCMI_MAX_STR_SIZE];
|
||||
const struct scmi_handle *handle = h;
|
||||
struct device *dev = handle->dev;
|
||||
struct scmi_revision_info *rev = handle->version;
|
||||
struct device *dev = ph->dev;
|
||||
struct scmi_revision_info *rev = scmi_get_revision_area(ph);
|
||||
|
||||
ret = scmi_version_get(handle, SCMI_PROTOCOL_BASE, &version);
|
||||
ret = ph->xops->version_get(ph, &version);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
@ -338,13 +348,15 @@ int scmi_base_protocol_init(struct scmi_handle *h)
|
||||
|
||||
rev->major_ver = PROTOCOL_REV_MAJOR(version),
|
||||
rev->minor_ver = PROTOCOL_REV_MINOR(version);
|
||||
ph->set_priv(ph, rev);
|
||||
|
||||
scmi_base_attributes_get(handle);
|
||||
scmi_base_vendor_id_get(handle, false);
|
||||
scmi_base_vendor_id_get(handle, true);
|
||||
scmi_base_implementation_version_get(handle);
|
||||
scmi_base_implementation_list_get(handle, prot_imp);
|
||||
scmi_setup_protocol_implemented(handle, prot_imp);
|
||||
scmi_base_attributes_get(ph);
|
||||
scmi_base_vendor_id_get(ph, false);
|
||||
scmi_base_vendor_id_get(ph, true);
|
||||
scmi_base_implementation_version_get(ph);
|
||||
scmi_base_implementation_list_get(ph, prot_imp);
|
||||
|
||||
scmi_setup_protocol_implemented(ph, prot_imp);
|
||||
|
||||
dev_info(dev, "SCMI Protocol v%d.%d '%s:%s' Firmware version 0x%x\n",
|
||||
rev->major_ver, rev->minor_ver, rev->vendor_id,
|
||||
@ -352,16 +364,20 @@ int scmi_base_protocol_init(struct scmi_handle *h)
|
||||
dev_dbg(dev, "Found %d protocol(s) %d agent(s)\n", rev->num_protocols,
|
||||
rev->num_agents);
|
||||
|
||||
scmi_register_protocol_events(handle, SCMI_PROTOCOL_BASE,
|
||||
(4 * SCMI_PROTO_QUEUE_SZ),
|
||||
&base_event_ops, base_events,
|
||||
ARRAY_SIZE(base_events),
|
||||
SCMI_BASE_NUM_SOURCES);
|
||||
|
||||
for (id = 0; id < rev->num_agents; id++) {
|
||||
scmi_base_discover_agent_get(handle, id, name);
|
||||
scmi_base_discover_agent_get(ph, id, name);
|
||||
dev_dbg(dev, "Agent %d: %s\n", id, name);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static const struct scmi_protocol scmi_base = {
|
||||
.id = SCMI_PROTOCOL_BASE,
|
||||
.owner = NULL,
|
||||
.init_instance = &scmi_base_protocol_init,
|
||||
.ops = NULL,
|
||||
.events = &base_protocol_events,
|
||||
};
|
||||
|
||||
DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(base, scmi_base)
|
||||
|
@ -16,7 +16,7 @@
|
||||
#include "common.h"
|
||||
|
||||
static DEFINE_IDA(scmi_bus_id);
|
||||
static DEFINE_IDR(scmi_protocols);
|
||||
static DEFINE_IDR(scmi_available_protocols);
|
||||
static DEFINE_SPINLOCK(protocol_lock);
|
||||
|
||||
static const struct scmi_device_id *
|
||||
@ -51,18 +51,53 @@ static int scmi_dev_match(struct device *dev, struct device_driver *drv)
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int scmi_protocol_init(int protocol_id, struct scmi_handle *handle)
|
||||
static int scmi_match_by_id_table(struct device *dev, void *data)
|
||||
{
|
||||
scmi_prot_init_fn_t fn = idr_find(&scmi_protocols, protocol_id);
|
||||
struct scmi_device *sdev = to_scmi_dev(dev);
|
||||
struct scmi_device_id *id_table = data;
|
||||
|
||||
if (unlikely(!fn))
|
||||
return -EINVAL;
|
||||
return fn(handle);
|
||||
return sdev->protocol_id == id_table->protocol_id &&
|
||||
!strcmp(sdev->name, id_table->name);
|
||||
}
|
||||
|
||||
static int scmi_protocol_dummy_init(struct scmi_handle *handle)
|
||||
struct scmi_device *scmi_find_child_dev(struct device *parent,
|
||||
int prot_id, const char *name)
|
||||
{
|
||||
return 0;
|
||||
struct scmi_device_id id_table;
|
||||
struct device *dev;
|
||||
|
||||
id_table.protocol_id = prot_id;
|
||||
id_table.name = name;
|
||||
|
||||
dev = device_find_child(parent, &id_table, scmi_match_by_id_table);
|
||||
if (!dev)
|
||||
return NULL;
|
||||
|
||||
return to_scmi_dev(dev);
|
||||
}
|
||||
|
||||
const struct scmi_protocol *scmi_get_protocol(int protocol_id)
|
||||
{
|
||||
const struct scmi_protocol *proto;
|
||||
|
||||
proto = idr_find(&scmi_available_protocols, protocol_id);
|
||||
if (!proto || !try_module_get(proto->owner)) {
|
||||
pr_warn("SCMI Protocol 0x%x not found!\n", protocol_id);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
pr_debug("GOT SCMI Protocol 0x%x\n", protocol_id);
|
||||
|
||||
return proto;
|
||||
}
|
||||
|
||||
void scmi_put_protocol(int protocol_id)
|
||||
{
|
||||
const struct scmi_protocol *proto;
|
||||
|
||||
proto = idr_find(&scmi_available_protocols, protocol_id);
|
||||
if (proto)
|
||||
module_put(proto->owner);
|
||||
}
|
||||
|
||||
static int scmi_dev_probe(struct device *dev)
|
||||
@ -70,7 +105,6 @@ static int scmi_dev_probe(struct device *dev)
|
||||
struct scmi_driver *scmi_drv = to_scmi_driver(dev->driver);
|
||||
struct scmi_device *scmi_dev = to_scmi_dev(dev);
|
||||
const struct scmi_device_id *id;
|
||||
int ret;
|
||||
|
||||
id = scmi_dev_match_id(scmi_dev, scmi_drv);
|
||||
if (!id)
|
||||
@ -79,14 +113,6 @@ static int scmi_dev_probe(struct device *dev)
|
||||
if (!scmi_dev->handle)
|
||||
return -EPROBE_DEFER;
|
||||
|
||||
ret = scmi_protocol_init(scmi_dev->protocol_id, scmi_dev->handle);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/* Skip protocol initialisation for additional devices */
|
||||
idr_replace(&scmi_protocols, &scmi_protocol_dummy_init,
|
||||
scmi_dev->protocol_id);
|
||||
|
||||
return scmi_drv->probe(scmi_dev);
|
||||
}
|
||||
|
||||
@ -113,6 +139,10 @@ int scmi_driver_register(struct scmi_driver *driver, struct module *owner,
|
||||
{
|
||||
int retval;
|
||||
|
||||
retval = scmi_request_protocol_device(driver->id_table);
|
||||
if (retval)
|
||||
return retval;
|
||||
|
||||
driver->driver.bus = &scmi_bus_type;
|
||||
driver->driver.name = driver->name;
|
||||
driver->driver.owner = owner;
|
||||
@ -129,6 +159,7 @@ EXPORT_SYMBOL_GPL(scmi_driver_register);
|
||||
void scmi_driver_unregister(struct scmi_driver *driver)
|
||||
{
|
||||
driver_unregister(&driver->driver);
|
||||
scmi_unrequest_protocol_device(driver->id_table);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(scmi_driver_unregister);
|
||||
|
||||
@ -194,26 +225,45 @@ void scmi_set_handle(struct scmi_device *scmi_dev)
|
||||
scmi_dev->handle = scmi_handle_get(&scmi_dev->dev);
|
||||
}
|
||||
|
||||
int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn)
|
||||
int scmi_protocol_register(const struct scmi_protocol *proto)
|
||||
{
|
||||
int ret;
|
||||
|
||||
spin_lock(&protocol_lock);
|
||||
ret = idr_alloc(&scmi_protocols, fn, protocol_id, protocol_id + 1,
|
||||
GFP_ATOMIC);
|
||||
spin_unlock(&protocol_lock);
|
||||
if (ret != protocol_id)
|
||||
pr_err("unable to allocate SCMI idr slot, err %d\n", ret);
|
||||
if (!proto) {
|
||||
pr_err("invalid protocol\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
if (!proto->init_instance) {
|
||||
pr_err("missing .init() for protocol 0x%x\n", proto->id);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
spin_lock(&protocol_lock);
|
||||
ret = idr_alloc(&scmi_available_protocols, (void *)proto,
|
||||
proto->id, proto->id + 1, GFP_ATOMIC);
|
||||
spin_unlock(&protocol_lock);
|
||||
if (ret != proto->id) {
|
||||
pr_err("unable to allocate SCMI idr slot for 0x%x - err %d\n",
|
||||
proto->id, ret);
|
||||
return ret;
|
||||
}
|
||||
|
||||
pr_debug("Registered SCMI Protocol 0x%x\n", proto->id);
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(scmi_protocol_register);
|
||||
|
||||
void scmi_protocol_unregister(int protocol_id)
|
||||
void scmi_protocol_unregister(const struct scmi_protocol *proto)
|
||||
{
|
||||
spin_lock(&protocol_lock);
|
||||
idr_remove(&scmi_protocols, protocol_id);
|
||||
idr_remove(&scmi_available_protocols, proto->id);
|
||||
spin_unlock(&protocol_lock);
|
||||
|
||||
pr_debug("Unregistered SCMI Protocol 0x%x\n", proto->id);
|
||||
|
||||
return;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(scmi_protocol_unregister);
|
||||
|
||||
|
@ -2,9 +2,10 @@
|
||||
/*
|
||||
* System Control and Management Interface (SCMI) Clock Protocol
|
||||
*
|
||||
* Copyright (C) 2018 ARM Ltd.
|
||||
* Copyright (C) 2018-2020 ARM Ltd.
|
||||
*/
|
||||
|
||||
#include <linux/module.h>
|
||||
#include <linux/sort.h>
|
||||
|
||||
#include "common.h"
|
||||
@ -74,38 +75,39 @@ struct clock_info {
|
||||
struct scmi_clock_info *clk;
|
||||
};
|
||||
|
||||
static int scmi_clock_protocol_attributes_get(const struct scmi_handle *handle,
|
||||
static int
|
||||
scmi_clock_protocol_attributes_get(const struct scmi_protocol_handle *ph,
|
||||
struct clock_info *ci)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_xfer *t;
|
||||
struct scmi_msg_resp_clock_protocol_attributes *attr;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, PROTOCOL_ATTRIBUTES,
|
||||
SCMI_PROTOCOL_CLOCK, 0, sizeof(*attr), &t);
|
||||
ret = ph->xops->xfer_get_init(ph, PROTOCOL_ATTRIBUTES,
|
||||
0, sizeof(*attr), &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
attr = t->rx.buf;
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
if (!ret) {
|
||||
ci->num_clocks = le16_to_cpu(attr->num_clocks);
|
||||
ci->max_async_req = attr->max_async_req;
|
||||
}
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int scmi_clock_attributes_get(const struct scmi_handle *handle,
|
||||
static int scmi_clock_attributes_get(const struct scmi_protocol_handle *ph,
|
||||
u32 clk_id, struct scmi_clock_info *clk)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_xfer *t;
|
||||
struct scmi_msg_resp_clock_attributes *attr;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, CLOCK_ATTRIBUTES, SCMI_PROTOCOL_CLOCK,
|
||||
ret = ph->xops->xfer_get_init(ph, CLOCK_ATTRIBUTES,
|
||||
sizeof(clk_id), sizeof(*attr), &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
@ -113,13 +115,13 @@ static int scmi_clock_attributes_get(const struct scmi_handle *handle,
|
||||
put_unaligned_le32(clk_id, t->tx.buf);
|
||||
attr = t->rx.buf;
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
if (!ret)
|
||||
strlcpy(clk->name, attr->name, SCMI_MAX_STR_SIZE);
|
||||
else
|
||||
clk->name[0] = '\0';
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
@ -136,7 +138,7 @@ static int rate_cmp_func(const void *_r1, const void *_r2)
|
||||
}
|
||||
|
||||
static int
|
||||
scmi_clock_describe_rates_get(const struct scmi_handle *handle, u32 clk_id,
|
||||
scmi_clock_describe_rates_get(const struct scmi_protocol_handle *ph, u32 clk_id,
|
||||
struct scmi_clock_info *clk)
|
||||
{
|
||||
u64 *rate = NULL;
|
||||
@ -148,8 +150,8 @@ scmi_clock_describe_rates_get(const struct scmi_handle *handle, u32 clk_id,
|
||||
struct scmi_msg_clock_describe_rates *clk_desc;
|
||||
struct scmi_msg_resp_clock_describe_rates *rlist;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, CLOCK_DESCRIBE_RATES,
|
||||
SCMI_PROTOCOL_CLOCK, sizeof(*clk_desc), 0, &t);
|
||||
ret = ph->xops->xfer_get_init(ph, CLOCK_DESCRIBE_RATES,
|
||||
sizeof(*clk_desc), 0, &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
@ -161,7 +163,7 @@ scmi_clock_describe_rates_get(const struct scmi_handle *handle, u32 clk_id,
|
||||
/* Set the number of rates to be skipped/already read */
|
||||
clk_desc->rate_index = cpu_to_le32(tot_rate_cnt);
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
if (ret)
|
||||
goto err;
|
||||
|
||||
@ -171,7 +173,7 @@ scmi_clock_describe_rates_get(const struct scmi_handle *handle, u32 clk_id,
|
||||
num_returned = NUM_RETURNED(rates_flag);
|
||||
|
||||
if (tot_rate_cnt + num_returned > SCMI_MAX_NUM_RATES) {
|
||||
dev_err(handle->dev, "No. of rates > MAX_NUM_RATES");
|
||||
dev_err(ph->dev, "No. of rates > MAX_NUM_RATES");
|
||||
break;
|
||||
}
|
||||
|
||||
@ -179,7 +181,7 @@ scmi_clock_describe_rates_get(const struct scmi_handle *handle, u32 clk_id,
|
||||
clk->range.min_rate = RATE_TO_U64(rlist->rate[0]);
|
||||
clk->range.max_rate = RATE_TO_U64(rlist->rate[1]);
|
||||
clk->range.step_size = RATE_TO_U64(rlist->rate[2]);
|
||||
dev_dbg(handle->dev, "Min %llu Max %llu Step %llu Hz\n",
|
||||
dev_dbg(ph->dev, "Min %llu Max %llu Step %llu Hz\n",
|
||||
clk->range.min_rate, clk->range.max_rate,
|
||||
clk->range.step_size);
|
||||
break;
|
||||
@ -188,12 +190,12 @@ scmi_clock_describe_rates_get(const struct scmi_handle *handle, u32 clk_id,
|
||||
rate = &clk->list.rates[tot_rate_cnt];
|
||||
for (cnt = 0; cnt < num_returned; cnt++, rate++) {
|
||||
*rate = RATE_TO_U64(rlist->rate[cnt]);
|
||||
dev_dbg(handle->dev, "Rate %llu Hz\n", *rate);
|
||||
dev_dbg(ph->dev, "Rate %llu Hz\n", *rate);
|
||||
}
|
||||
|
||||
tot_rate_cnt += num_returned;
|
||||
|
||||
scmi_reset_rx_to_maxsz(handle, t);
|
||||
ph->xops->reset_rx_to_maxsz(ph, t);
|
||||
/*
|
||||
* check for both returned and remaining to avoid infinite
|
||||
* loop due to buggy firmware
|
||||
@ -208,42 +210,42 @@ scmi_clock_describe_rates_get(const struct scmi_handle *handle, u32 clk_id,
|
||||
clk->rate_discrete = rate_discrete;
|
||||
|
||||
err:
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int
|
||||
scmi_clock_rate_get(const struct scmi_handle *handle, u32 clk_id, u64 *value)
|
||||
scmi_clock_rate_get(const struct scmi_protocol_handle *ph,
|
||||
u32 clk_id, u64 *value)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_xfer *t;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, CLOCK_RATE_GET, SCMI_PROTOCOL_CLOCK,
|
||||
ret = ph->xops->xfer_get_init(ph, CLOCK_RATE_GET,
|
||||
sizeof(__le32), sizeof(u64), &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
put_unaligned_le32(clk_id, t->tx.buf);
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
if (!ret)
|
||||
*value = get_unaligned_le64(t->rx.buf);
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int scmi_clock_rate_set(const struct scmi_handle *handle, u32 clk_id,
|
||||
u64 rate)
|
||||
static int scmi_clock_rate_set(const struct scmi_protocol_handle *ph,
|
||||
u32 clk_id, u64 rate)
|
||||
{
|
||||
int ret;
|
||||
u32 flags = 0;
|
||||
struct scmi_xfer *t;
|
||||
struct scmi_clock_set_rate *cfg;
|
||||
struct clock_info *ci = handle->clk_priv;
|
||||
struct clock_info *ci = ph->get_priv(ph);
|
||||
|
||||
ret = scmi_xfer_get_init(handle, CLOCK_RATE_SET, SCMI_PROTOCOL_CLOCK,
|
||||
sizeof(*cfg), 0, &t);
|
||||
ret = ph->xops->xfer_get_init(ph, CLOCK_RATE_SET, sizeof(*cfg), 0, &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
@ -258,25 +260,26 @@ static int scmi_clock_rate_set(const struct scmi_handle *handle, u32 clk_id,
|
||||
cfg->value_high = cpu_to_le32(rate >> 32);
|
||||
|
||||
if (flags & CLOCK_SET_ASYNC)
|
||||
ret = scmi_do_xfer_with_response(handle, t);
|
||||
ret = ph->xops->do_xfer_with_response(ph, t);
|
||||
else
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
|
||||
if (ci->max_async_req)
|
||||
atomic_dec(&ci->cur_async_req);
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int
|
||||
scmi_clock_config_set(const struct scmi_handle *handle, u32 clk_id, u32 config)
|
||||
scmi_clock_config_set(const struct scmi_protocol_handle *ph, u32 clk_id,
|
||||
u32 config)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_xfer *t;
|
||||
struct scmi_clock_set_config *cfg;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, CLOCK_CONFIG_SET, SCMI_PROTOCOL_CLOCK,
|
||||
ret = ph->xops->xfer_get_init(ph, CLOCK_CONFIG_SET,
|
||||
sizeof(*cfg), 0, &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
@ -285,33 +288,33 @@ scmi_clock_config_set(const struct scmi_handle *handle, u32 clk_id, u32 config)
|
||||
cfg->id = cpu_to_le32(clk_id);
|
||||
cfg->attributes = cpu_to_le32(config);
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int scmi_clock_enable(const struct scmi_handle *handle, u32 clk_id)
|
||||
static int scmi_clock_enable(const struct scmi_protocol_handle *ph, u32 clk_id)
|
||||
{
|
||||
return scmi_clock_config_set(handle, clk_id, CLOCK_ENABLE);
|
||||
return scmi_clock_config_set(ph, clk_id, CLOCK_ENABLE);
|
||||
}
|
||||
|
||||
static int scmi_clock_disable(const struct scmi_handle *handle, u32 clk_id)
|
||||
static int scmi_clock_disable(const struct scmi_protocol_handle *ph, u32 clk_id)
|
||||
{
|
||||
return scmi_clock_config_set(handle, clk_id, 0);
|
||||
return scmi_clock_config_set(ph, clk_id, 0);
|
||||
}
|
||||
|
||||
static int scmi_clock_count_get(const struct scmi_handle *handle)
|
||||
static int scmi_clock_count_get(const struct scmi_protocol_handle *ph)
|
||||
{
|
||||
struct clock_info *ci = handle->clk_priv;
|
||||
struct clock_info *ci = ph->get_priv(ph);
|
||||
|
||||
return ci->num_clocks;
|
||||
}
|
||||
|
||||
static const struct scmi_clock_info *
|
||||
scmi_clock_info_get(const struct scmi_handle *handle, u32 clk_id)
|
||||
scmi_clock_info_get(const struct scmi_protocol_handle *ph, u32 clk_id)
|
||||
{
|
||||
struct clock_info *ci = handle->clk_priv;
|
||||
struct clock_info *ci = ph->get_priv(ph);
|
||||
struct scmi_clock_info *clk = ci->clk + clk_id;
|
||||
|
||||
if (!clk->name[0])
|
||||
@ -320,7 +323,7 @@ scmi_clock_info_get(const struct scmi_handle *handle, u32 clk_id)
|
||||
return clk;
|
||||
}
|
||||
|
||||
static const struct scmi_clk_ops clk_ops = {
|
||||
static const struct scmi_clk_proto_ops clk_proto_ops = {
|
||||
.count_get = scmi_clock_count_get,
|
||||
.info_get = scmi_clock_info_get,
|
||||
.rate_get = scmi_clock_rate_get,
|
||||
@ -329,24 +332,24 @@ static const struct scmi_clk_ops clk_ops = {
|
||||
.disable = scmi_clock_disable,
|
||||
};
|
||||
|
||||
static int scmi_clock_protocol_init(struct scmi_handle *handle)
|
||||
static int scmi_clock_protocol_init(const struct scmi_protocol_handle *ph)
|
||||
{
|
||||
u32 version;
|
||||
int clkid, ret;
|
||||
struct clock_info *cinfo;
|
||||
|
||||
scmi_version_get(handle, SCMI_PROTOCOL_CLOCK, &version);
|
||||
ph->xops->version_get(ph, &version);
|
||||
|
||||
dev_dbg(handle->dev, "Clock Version %d.%d\n",
|
||||
dev_dbg(ph->dev, "Clock Version %d.%d\n",
|
||||
PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
|
||||
|
||||
cinfo = devm_kzalloc(handle->dev, sizeof(*cinfo), GFP_KERNEL);
|
||||
cinfo = devm_kzalloc(ph->dev, sizeof(*cinfo), GFP_KERNEL);
|
||||
if (!cinfo)
|
||||
return -ENOMEM;
|
||||
|
||||
scmi_clock_protocol_attributes_get(handle, cinfo);
|
||||
scmi_clock_protocol_attributes_get(ph, cinfo);
|
||||
|
||||
cinfo->clk = devm_kcalloc(handle->dev, cinfo->num_clocks,
|
||||
cinfo->clk = devm_kcalloc(ph->dev, cinfo->num_clocks,
|
||||
sizeof(*cinfo->clk), GFP_KERNEL);
|
||||
if (!cinfo->clk)
|
||||
return -ENOMEM;
|
||||
@ -354,16 +357,20 @@ static int scmi_clock_protocol_init(struct scmi_handle *handle)
|
||||
for (clkid = 0; clkid < cinfo->num_clocks; clkid++) {
|
||||
struct scmi_clock_info *clk = cinfo->clk + clkid;
|
||||
|
||||
ret = scmi_clock_attributes_get(handle, clkid, clk);
|
||||
ret = scmi_clock_attributes_get(ph, clkid, clk);
|
||||
if (!ret)
|
||||
scmi_clock_describe_rates_get(handle, clkid, clk);
|
||||
scmi_clock_describe_rates_get(ph, clkid, clk);
|
||||
}
|
||||
|
||||
cinfo->version = version;
|
||||
handle->clk_ops = &clk_ops;
|
||||
handle->clk_priv = cinfo;
|
||||
|
||||
return 0;
|
||||
return ph->set_priv(ph, cinfo);
|
||||
}
|
||||
|
||||
DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(SCMI_PROTOCOL_CLOCK, clock)
|
||||
static const struct scmi_protocol scmi_clock = {
|
||||
.id = SCMI_PROTOCOL_CLOCK,
|
||||
.owner = THIS_MODULE,
|
||||
.init_instance = &scmi_clock_protocol_init,
|
||||
.ops = &clk_proto_ops,
|
||||
};
|
||||
|
||||
DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(clock, scmi_clock)
|
||||
|
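The tail of the Clock protocol diff above also shows the new registration model: each protocol is described by a const struct scmi_protocol (protocol id, owning module, init_instance hook, ops table, optional events) and wired into the core with DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(). A sketch of what a protocol implementation looks like under this scheme; the protocol name, id and ops structure below are hypothetical, only the shape mirrors scmi_clock and scmi_base from this diff, and it assumes the file includes "common.h" and <linux/module.h> as the ported protocols do:

/* Hypothetical protocol following the registration pattern introduced here. */
struct scmi_example_proto_ops {
	int (*count_get)(const struct scmi_protocol_handle *ph);
};

static int scmi_example_count_get(const struct scmi_protocol_handle *ph)
{
	return 0;	/* a real protocol would query the platform here */
}

static const struct scmi_example_proto_ops example_proto_ops = {
	.count_get = scmi_example_count_get,
};

static int scmi_example_protocol_init(const struct scmi_protocol_handle *ph)
{
	u32 version;

	ph->xops->version_get(ph, &version);
	dev_dbg(ph->dev, "Example Version %d.%d\n",
		PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
	return 0;
}

static const struct scmi_protocol scmi_example = {
	.id = 0x80,			/* hypothetical protocol identifier */
	.owner = THIS_MODULE,
	.init_instance = &scmi_example_protocol_init,
	.ops = &example_proto_ops,
};

DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(example, scmi_example)

For built-in protocols the matching DECLARE_SCMI_REGISTER_UNREGISTER(example) entry would also be needed in common.h, as the diff there shows for clock, voltage and the others.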
@ -14,11 +14,14 @@
|
||||
#include <linux/device.h>
|
||||
#include <linux/errno.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/scmi_protocol.h>
|
||||
#include <linux/types.h>
|
||||
|
||||
#include <asm/unaligned.h>
|
||||
|
||||
#include "notify.h"
|
||||
|
||||
#define PROTOCOL_REV_MINOR_MASK GENMASK(15, 0)
|
||||
#define PROTOCOL_REV_MAJOR_MASK GENMASK(31, 16)
|
||||
#define PROTOCOL_REV_MAJOR(x) (u16)(FIELD_GET(PROTOCOL_REV_MAJOR_MASK, (x)))
|
||||
@ -141,22 +144,92 @@ struct scmi_xfer {
|
||||
struct completion *async_done;
|
||||
};
|
||||
|
||||
void scmi_xfer_put(const struct scmi_handle *h, struct scmi_xfer *xfer);
|
||||
int scmi_do_xfer(const struct scmi_handle *h, struct scmi_xfer *xfer);
|
||||
int scmi_do_xfer_with_response(const struct scmi_handle *h,
|
||||
struct scmi_xfer_ops;
|
||||
|
||||
/**
|
||||
* struct scmi_protocol_handle - Reference to an initialized protocol instance
|
||||
*
|
||||
* @dev: A reference to the associated SCMI instance device (handle->dev).
|
||||
* @xops: A reference to a struct holding refs to the core xfer operations that
|
||||
* can be used by the protocol implementation to generate SCMI messages.
|
||||
* @set_priv: A method to set protocol private data for this instance.
|
||||
* @get_priv: A method to get protocol private data previously set.
|
||||
*
|
||||
* This structure represents a protocol initialized against specific SCMI
|
||||
* instance and it will be used as follows:
|
||||
* - as a parameter fed from the core to the protocol initialization code so
|
||||
* that it can access the core xfer operations to build and generate SCMI
|
||||
* messages exclusively for the specific underlying protocol instance.
|
||||
* - as an opaque handle fed by an SCMI driver user when it tries to access
|
||||
* this protocol through its own protocol operations.
|
||||
* In this case this handle will be returned as an opaque object together
|
||||
* with the related protocol operations when the SCMI driver tries to access
|
||||
* the protocol.
|
||||
*/
|
||||
struct scmi_protocol_handle {
|
||||
struct device *dev;
|
||||
const struct scmi_xfer_ops *xops;
|
||||
int (*set_priv)(const struct scmi_protocol_handle *ph, void *priv);
|
||||
void *(*get_priv)(const struct scmi_protocol_handle *ph);
|
||||
};
|
||||
|
||||
/**
|
||||
* struct scmi_xfer_ops - References to the core SCMI xfer operations.
|
||||
* @version_get: Get this version protocol.
|
||||
* @xfer_get_init: Initialize one struct xfer if any xfer slot is free.
|
||||
* @reset_rx_to_maxsz: Reset rx size to max transport size.
|
||||
* @do_xfer: Do the SCMI transfer.
|
||||
* @do_xfer_with_response: Do the SCMI transfer waiting for a response.
|
||||
* @xfer_put: Free the xfer slot.
|
||||
*
|
||||
* Note that all this operations expect a protocol handle as first parameter;
|
||||
* they then internally use it to infer the underlying protocol number: this
|
||||
* way is not possible for a protocol implementation to forge messages for
|
||||
* another protocol.
|
||||
*/
|
||||
struct scmi_xfer_ops {
|
||||
int (*version_get)(const struct scmi_protocol_handle *ph, u32 *version);
|
||||
int (*xfer_get_init)(const struct scmi_protocol_handle *ph, u8 msg_id,
|
||||
size_t tx_size, size_t rx_size,
|
||||
struct scmi_xfer **p);
|
||||
void (*reset_rx_to_maxsz)(const struct scmi_protocol_handle *ph,
|
||||
struct scmi_xfer *xfer);
|
||||
int scmi_xfer_get_init(const struct scmi_handle *h, u8 msg_id, u8 prot_id,
|
||||
size_t tx_size, size_t rx_size, struct scmi_xfer **p);
|
||||
void scmi_reset_rx_to_maxsz(const struct scmi_handle *handle,
|
||||
int (*do_xfer)(const struct scmi_protocol_handle *ph,
|
||||
struct scmi_xfer *xfer);
|
||||
int (*do_xfer_with_response)(const struct scmi_protocol_handle *ph,
|
||||
struct scmi_xfer *xfer);
|
||||
void (*xfer_put)(const struct scmi_protocol_handle *ph,
|
||||
struct scmi_xfer *xfer);
|
||||
};
|
||||
|
||||
struct scmi_revision_info *
scmi_get_revision_area(const struct scmi_protocol_handle *ph);
int scmi_handle_put(const struct scmi_handle *handle);
struct scmi_handle *scmi_handle_get(struct device *dev);
void scmi_set_handle(struct scmi_device *scmi_dev);
int scmi_version_get(const struct scmi_handle *h, u8 protocol, u32 *version);
void scmi_setup_protocol_implemented(const struct scmi_handle *handle,
void scmi_setup_protocol_implemented(const struct scmi_protocol_handle *ph,
				     u8 *prot_imp);

int scmi_base_protocol_init(struct scmi_handle *h);
typedef int (*scmi_prot_init_ph_fn_t)(const struct scmi_protocol_handle *);

/**
 * struct scmi_protocol - Protocol descriptor
 * @id: Protocol ID.
 * @owner: Module reference if any.
 * @init_instance: Mandatory protocol initialization function.
 * @deinit_instance: Optional protocol de-initialization function.
 * @ops: Optional reference to the operations provided by the protocol and
 *	 exposed in scmi_protocol.h.
 * @events: An optional reference to the events supported by this protocol.
 */
struct scmi_protocol {
	const u8 id;
	struct module *owner;
	const scmi_prot_init_ph_fn_t init_instance;
	const scmi_prot_init_ph_fn_t deinit_instance;
	const void *ops;
	const struct scmi_protocol_events *events;
};

int __init scmi_bus_init(void);
void __exit scmi_bus_exit(void);
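/*
 * Editor's note: a minimal, hypothetical example (not part of the patch) of
 * how a protocol now describes itself with the structure above. The names
 * example_protocol_init, example_proto_info, SCMI_PROTOCOL_EXAMPLE and
 * example_proto_ops are placeholders; the init callback receives only the
 * protocol handle and stashes its private data via ph->set_priv(), mirroring
 * the perf/power conversions further down in this merge.
 */
static int example_protocol_init(const struct scmi_protocol_handle *ph)
{
	u32 version;
	struct example_proto_info *pinfo;

	ph->xops->version_get(ph, &version);

	pinfo = devm_kzalloc(ph->dev, sizeof(*pinfo), GFP_KERNEL);
	if (!pinfo)
		return -ENOMEM;

	pinfo->version = version;
	return ph->set_priv(ph, pinfo);
}

static const struct scmi_protocol scmi_example = {
	.id = SCMI_PROTOCOL_EXAMPLE,	/* hypothetical protocol number */
	.owner = THIS_MODULE,
	.init_instance = &example_protocol_init,
	.ops = &example_proto_ops,	/* hypothetical ops table */
};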
@@ -164,24 +237,32 @@ void __exit scmi_bus_exit(void);
#define DECLARE_SCMI_REGISTER_UNREGISTER(func)		\
	int __init scmi_##func##_register(void);	\
	void __exit scmi_##func##_unregister(void)
DECLARE_SCMI_REGISTER_UNREGISTER(base);
DECLARE_SCMI_REGISTER_UNREGISTER(clock);
DECLARE_SCMI_REGISTER_UNREGISTER(perf);
DECLARE_SCMI_REGISTER_UNREGISTER(power);
DECLARE_SCMI_REGISTER_UNREGISTER(reset);
DECLARE_SCMI_REGISTER_UNREGISTER(sensors);
DECLARE_SCMI_REGISTER_UNREGISTER(voltage);
DECLARE_SCMI_REGISTER_UNREGISTER(system);

#define DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(id, name) \
#define DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(name, proto) \
int __init scmi_##name##_register(void) \
{ \
	return scmi_protocol_register((id), &scmi_##name##_protocol_init); \
	return scmi_protocol_register(&(proto)); \
} \
\
void __exit scmi_##name##_unregister(void) \
{ \
	scmi_protocol_unregister((id)); \
	scmi_protocol_unregister(&(proto)); \
}

const struct scmi_protocol *scmi_get_protocol(int protocol_id);
void scmi_put_protocol(int protocol_id);

int scmi_acquire_protocol(const struct scmi_handle *handle, u8 protocol_id);
void scmi_release_protocol(const struct scmi_handle *handle, u8 protocol_id);

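/*
 * Editor's note: for clarity, with the reworked macro above an invocation
 * such as DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(perf, scmi_perf), as used
 * by the perf protocol later in this merge, now expands to roughly:
 */
int __init scmi_perf_register(void)
{
	return scmi_protocol_register(&scmi_perf);
}

void __exit scmi_perf_unregister(void)
{
	scmi_protocol_unregister(&scmi_perf);
}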
/* SCMI Transport */
/**
 * struct scmi_chan_info - Structure representing a SCMI channel information
@@ -226,6 +307,11 @@ struct scmi_transport_ops {
	bool (*poll_done)(struct scmi_chan_info *cinfo, struct scmi_xfer *xfer);
};

int scmi_request_protocol_device(const struct scmi_device_id *id_table);
void scmi_unrequest_protocol_device(const struct scmi_device_id *id_table);
struct scmi_device *scmi_find_child_dev(struct device *parent,
					int prot_id, const char *name);

/**
 * struct scmi_desc - Description of SoC integration
 *
@@ -264,4 +350,8 @@ void shmem_clear_channel(struct scmi_shared_mem __iomem *shmem);
bool shmem_poll_done(struct scmi_shared_mem __iomem *shmem,
		     struct scmi_xfer *xfer);

void scmi_set_notification_instance_data(const struct scmi_handle *handle,
					 void *priv);
void *scmi_get_notification_instance_data(const struct scmi_handle *handle);

#endif /* _SCMI_COMMON_H */
File diff suppressed because it is too large
@ -91,6 +91,7 @@
|
||||
#include <linux/types.h>
|
||||
#include <linux/workqueue.h>
|
||||
|
||||
#include "common.h"
|
||||
#include "notify.h"
|
||||
|
||||
#define SCMI_MAX_PROTO 256
|
||||
@ -177,7 +178,7 @@
|
||||
#define REVT_NOTIFY_SET_STATUS(revt, eid, sid, state) \
|
||||
({ \
|
||||
typeof(revt) r = revt; \
|
||||
r->proto->ops->set_notify_enabled(r->proto->ni->handle, \
|
||||
r->proto->ops->set_notify_enabled(r->proto->ph, \
|
||||
(eid), (sid), (state)); \
|
||||
})
|
||||
|
||||
@ -190,7 +191,7 @@
|
||||
#define REVT_FILL_REPORT(revt, ...) \
|
||||
({ \
|
||||
typeof(revt) r = revt; \
|
||||
r->proto->ops->fill_custom_report(r->proto->ni->handle, \
|
||||
r->proto->ops->fill_custom_report(r->proto->ph, \
|
||||
__VA_ARGS__); \
|
||||
})
|
||||
|
||||
@ -278,6 +279,7 @@ struct scmi_registered_event;
|
||||
* events' descriptors, whose fixed-size is determined at
|
||||
* compile time.
|
||||
* @registered_mtx: A mutex to protect @registered_events_handlers
|
||||
* @ph: SCMI protocol handle reference
|
||||
* @registered_events_handlers: An hashtable containing all events' handlers
|
||||
* descriptors registered for this protocol
|
||||
*
|
||||
@ -302,6 +304,7 @@ struct scmi_registered_events_desc {
|
||||
struct scmi_registered_event **registered_events;
|
||||
/* mutex to protect registered_events_handlers */
|
||||
struct mutex registered_mtx;
|
||||
const struct scmi_protocol_handle *ph;
|
||||
DECLARE_HASHTABLE(registered_events_handlers, SCMI_REGISTERED_HASH_SZ);
|
||||
};
|
||||
|
||||
@ -368,7 +371,7 @@ static struct scmi_event_handler *
|
||||
scmi_get_active_handler(struct scmi_notify_instance *ni, u32 evt_key);
|
||||
static void scmi_put_active_handler(struct scmi_notify_instance *ni,
|
||||
struct scmi_event_handler *hndl);
|
||||
static void scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
|
||||
static bool scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
|
||||
struct scmi_event_handler *hndl);
|
||||
|
||||
/**
|
||||
@ -579,11 +582,9 @@ int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
|
||||
struct scmi_event_header eh;
|
||||
struct scmi_notify_instance *ni;
|
||||
|
||||
/* Ensure notify_priv is updated */
|
||||
smp_rmb();
|
||||
if (!handle->notify_priv)
|
||||
ni = scmi_get_notification_instance_data(handle);
|
||||
if (!ni)
|
||||
return 0;
|
||||
ni = handle->notify_priv;
|
||||
|
||||
r_evt = SCMI_GET_REVT(ni, proto_id, evt_id);
|
||||
if (!r_evt)
|
||||
@ -732,14 +733,10 @@ scmi_allocate_registered_events_desc(struct scmi_notify_instance *ni,
|
||||
/**
|
||||
* scmi_register_protocol_events() - Register Protocol Events with the core
|
||||
* @handle: The handle identifying the platform instance against which the
|
||||
* the protocol's events are registered
|
||||
* protocol's events are registered
|
||||
* @proto_id: Protocol ID
|
||||
* @queue_sz: Size in bytes of the associated queue to be allocated
|
||||
* @ops: Protocol specific event-related operations
|
||||
* @evt: Event descriptor array
|
||||
* @num_events: Number of events in @evt array
|
||||
* @num_sources: Number of possible sources for this protocol on this
|
||||
* platform.
|
||||
* @ph: SCMI protocol handle.
|
||||
* @ee: A structure describing the events supported by this protocol.
|
||||
*
|
||||
* Used by SCMI Protocols initialization code to register with the notification
|
||||
* core the list of supported events and their descriptors: takes care to
|
||||
@ -748,40 +745,49 @@ scmi_allocate_registered_events_desc(struct scmi_notify_instance *ni,
|
||||
*
|
||||
* Return: 0 on Success
|
||||
*/
|
||||
int scmi_register_protocol_events(const struct scmi_handle *handle,
|
||||
u8 proto_id, size_t queue_sz,
|
||||
const struct scmi_event_ops *ops,
|
||||
const struct scmi_event *evt, int num_events,
|
||||
int num_sources)
|
||||
int scmi_register_protocol_events(const struct scmi_handle *handle, u8 proto_id,
|
||||
const struct scmi_protocol_handle *ph,
|
||||
const struct scmi_protocol_events *ee)
|
||||
{
|
||||
int i;
|
||||
unsigned int num_sources;
|
||||
size_t payld_sz = 0;
|
||||
struct scmi_registered_events_desc *pd;
|
||||
struct scmi_notify_instance *ni;
|
||||
const struct scmi_event *evt;
|
||||
|
||||
if (!ops || !evt)
|
||||
if (!ee || !ee->ops || !ee->evts || !ph ||
|
||||
(!ee->num_sources && !ee->ops->get_num_sources))
|
||||
return -EINVAL;
|
||||
|
||||
/* Ensure notify_priv is updated */
|
||||
smp_rmb();
|
||||
if (!handle->notify_priv)
|
||||
return -ENOMEM;
|
||||
ni = handle->notify_priv;
|
||||
|
||||
/* Attach to the notification main devres group */
|
||||
if (!devres_open_group(ni->handle->dev, ni->gid, GFP_KERNEL))
|
||||
ni = scmi_get_notification_instance_data(handle);
|
||||
if (!ni)
|
||||
return -ENOMEM;
|
||||
|
||||
for (i = 0; i < num_events; i++)
|
||||
/* num_sources cannot be <= 0 */
|
||||
if (ee->num_sources) {
|
||||
num_sources = ee->num_sources;
|
||||
} else {
|
||||
int nsrc = ee->ops->get_num_sources(ph);
|
||||
|
||||
if (nsrc <= 0)
|
||||
return -EINVAL;
|
||||
num_sources = nsrc;
|
||||
}
|
||||
|
||||
evt = ee->evts;
|
||||
for (i = 0; i < ee->num_events; i++)
|
||||
payld_sz = max_t(size_t, payld_sz, evt[i].max_payld_sz);
|
||||
payld_sz += sizeof(struct scmi_event_header);
|
||||
|
||||
pd = scmi_allocate_registered_events_desc(ni, proto_id, queue_sz,
|
||||
payld_sz, num_events, ops);
|
||||
pd = scmi_allocate_registered_events_desc(ni, proto_id, ee->queue_sz,
|
||||
payld_sz, ee->num_events,
|
||||
ee->ops);
|
||||
if (IS_ERR(pd))
|
||||
goto err;
|
||||
|
||||
for (i = 0; i < num_events; i++, evt++) {
|
||||
pd->ph = ph;
|
||||
for (i = 0; i < ee->num_events; i++, evt++) {
|
||||
struct scmi_registered_event *r_evt;
|
||||
|
||||
r_evt = devm_kzalloc(ni->handle->dev, sizeof(*r_evt),
|
||||
@ -815,8 +821,6 @@ int scmi_register_protocol_events(const struct scmi_handle *handle,
|
||||
/* Ensure protocols are updated */
|
||||
smp_wmb();
|
||||
|
||||
devres_close_group(ni->handle->dev, ni->gid);
|
||||
|
||||
/*
|
||||
* Finalize any pending events' handler which could have been waiting
|
||||
* for this protocol's events registration.
|
||||
@ -827,12 +831,37 @@ int scmi_register_protocol_events(const struct scmi_handle *handle,
|
||||
|
||||
err:
|
||||
dev_warn(handle->dev, "Proto:%X - Registration Failed !\n", proto_id);
|
||||
/* A failing protocol registration does not trigger full failure */
|
||||
devres_close_group(ni->handle->dev, ni->gid);
|
||||
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
/**
 * scmi_deregister_protocol_events - Deregister protocol events with the core
 * @handle: The handle identifying the platform instance against which the
 *	    protocol's events are registered
 * @proto_id: Protocol ID
 */
void scmi_deregister_protocol_events(const struct scmi_handle *handle,
				     u8 proto_id)
{
	struct scmi_notify_instance *ni;
	struct scmi_registered_events_desc *pd;

	ni = scmi_get_notification_instance_data(handle);
	if (!ni)
		return;

	pd = ni->registered_protocols[proto_id];
	if (!pd)
		return;

	ni->registered_protocols[proto_id] = NULL;
	/* Ensure protocols are updated */
	smp_wmb();

	cancel_work_sync(&pd->equeue.notify_work);
}

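/*
 * Editor's note: an illustrative sketch (not part of the patch; the real
 * call sites live in the suppressed driver.c diff) of how the core is
 * expected to pair registration and deregistration around a protocol
 * instance, using the scmi_protocol_events descriptor each protocol now
 * exports:
 */
static void example_setup_protocol_notifications(const struct scmi_handle *handle,
						 const struct scmi_protocol *proto,
						 const struct scmi_protocol_handle *ph)
{
	/* Register only if the protocol advertises events */
	if (proto->events)
		scmi_register_protocol_events(handle, proto->id, ph,
					      proto->events);
}

/* ... and, on protocol instance teardown: */
/*	scmi_deregister_protocol_events(handle, proto->id);	*/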
/**
|
||||
* scmi_allocate_event_handler() - Allocate Event handler
|
||||
* @ni: A reference to the notification instance to use
|
||||
@ -900,9 +929,21 @@ static inline int scmi_bind_event_handler(struct scmi_notify_instance *ni,
|
||||
if (!r_evt)
|
||||
return -EINVAL;
|
||||
|
||||
/* Remove from pending and insert into registered */
|
||||
/*
|
||||
* Remove from pending and insert into registered while getting hold
|
||||
* of protocol instance.
|
||||
*/
|
||||
hash_del(&hndl->hash);
|
||||
/*
|
||||
* Acquire protocols only for NON pending handlers, so as NOT to trigger
|
||||
* protocol initialization when a notifier is registered against a still
|
||||
* not registered protocol, since it would make little sense to force init
|
||||
* protocols for which still no SCMI driver user exists: they wouldn't
|
||||
* emit any event anyway till some SCMI driver starts using it.
|
||||
*/
|
||||
scmi_acquire_protocol(ni->handle, KEY_XTRACT_PROTO_ID(hndl->key));
|
||||
hndl->r_evt = r_evt;
|
||||
|
||||
mutex_lock(&r_evt->proto->registered_mtx);
|
||||
hash_add(r_evt->proto->registered_events_handlers,
|
||||
&hndl->hash, hndl->key);
|
||||
@ -1193,41 +1234,65 @@ static int scmi_disable_events(struct scmi_event_handler *hndl)
|
||||
* * unregister and free the handler itself
|
||||
*
|
||||
* Context: Assumes all the proper locking has been managed by the caller.
|
||||
*
|
||||
* Return: True if handler was freed (users dropped to zero)
|
||||
*/
|
||||
static void scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
|
||||
static bool scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
|
||||
struct scmi_event_handler *hndl)
|
||||
{
|
||||
bool freed = false;
|
||||
|
||||
if (refcount_dec_and_test(&hndl->users)) {
|
||||
if (!IS_HNDL_PENDING(hndl))
|
||||
scmi_disable_events(hndl);
|
||||
scmi_free_event_handler(hndl);
|
||||
freed = true;
|
||||
}
|
||||
|
||||
return freed;
|
||||
}
|
||||
|
||||
static void scmi_put_handler(struct scmi_notify_instance *ni,
|
||||
struct scmi_event_handler *hndl)
|
||||
{
|
||||
bool freed;
|
||||
u8 protocol_id;
|
||||
struct scmi_registered_event *r_evt = hndl->r_evt;
|
||||
|
||||
mutex_lock(&ni->pending_mtx);
|
||||
if (r_evt)
|
||||
if (r_evt) {
|
||||
protocol_id = r_evt->proto->id;
|
||||
mutex_lock(&r_evt->proto->registered_mtx);
|
||||
}
|
||||
|
||||
scmi_put_handler_unlocked(ni, hndl);
|
||||
freed = scmi_put_handler_unlocked(ni, hndl);
|
||||
|
||||
if (r_evt)
|
||||
if (r_evt) {
|
||||
mutex_unlock(&r_evt->proto->registered_mtx);
|
||||
/*
|
||||
* Only registered handler acquired protocol; must be here
|
||||
* released only AFTER unlocking registered_mtx, since
|
||||
* releasing a protocol can trigger its de-initialization
|
||||
* (ie. including r_evt and registered_mtx)
|
||||
*/
|
||||
if (freed)
|
||||
scmi_release_protocol(ni->handle, protocol_id);
|
||||
}
|
||||
mutex_unlock(&ni->pending_mtx);
|
||||
}
|
||||
|
||||
static void scmi_put_active_handler(struct scmi_notify_instance *ni,
|
||||
struct scmi_event_handler *hndl)
|
||||
{
|
||||
bool freed;
|
||||
struct scmi_registered_event *r_evt = hndl->r_evt;
|
||||
u8 protocol_id = r_evt->proto->id;
|
||||
|
||||
mutex_lock(&r_evt->proto->registered_mtx);
|
||||
scmi_put_handler_unlocked(ni, hndl);
|
||||
freed = scmi_put_handler_unlocked(ni, hndl);
|
||||
mutex_unlock(&r_evt->proto->registered_mtx);
|
||||
if (freed)
|
||||
scmi_release_protocol(ni->handle, protocol_id);
|
||||
}
|
||||
|
||||
/**
|
||||
@ -1288,11 +1353,9 @@ static int scmi_register_notifier(const struct scmi_handle *handle,
|
||||
struct scmi_event_handler *hndl;
|
||||
struct scmi_notify_instance *ni;
|
||||
|
||||
/* Ensure notify_priv is updated */
|
||||
smp_rmb();
|
||||
if (!handle->notify_priv)
|
||||
ni = scmi_get_notification_instance_data(handle);
|
||||
if (!ni)
|
||||
return -ENODEV;
|
||||
ni = handle->notify_priv;
|
||||
|
||||
evt_key = MAKE_HASH_KEY(proto_id, evt_id,
|
||||
src_id ? *src_id : SRC_ID_MASK);
|
||||
@ -1336,11 +1399,9 @@ static int scmi_unregister_notifier(const struct scmi_handle *handle,
|
||||
struct scmi_event_handler *hndl;
|
||||
struct scmi_notify_instance *ni;
|
||||
|
||||
/* Ensure notify_priv is updated */
|
||||
smp_rmb();
|
||||
if (!handle->notify_priv)
|
||||
ni = scmi_get_notification_instance_data(handle);
|
||||
if (!ni)
|
||||
return -ENODEV;
|
||||
ni = handle->notify_priv;
|
||||
|
||||
evt_key = MAKE_HASH_KEY(proto_id, evt_id,
|
||||
src_id ? *src_id : SRC_ID_MASK);
|
||||
@ -1371,6 +1432,127 @@ static int scmi_unregister_notifier(const struct scmi_handle *handle,
|
||||
return 0;
|
||||
}
|
||||
|
||||
struct scmi_notifier_devres {
|
||||
const struct scmi_handle *handle;
|
||||
u8 proto_id;
|
||||
u8 evt_id;
|
||||
u32 __src_id;
|
||||
u32 *src_id;
|
||||
struct notifier_block *nb;
|
||||
};
|
||||
|
||||
static void scmi_devm_release_notifier(struct device *dev, void *res)
|
||||
{
|
||||
struct scmi_notifier_devres *dres = res;
|
||||
|
||||
scmi_unregister_notifier(dres->handle, dres->proto_id, dres->evt_id,
|
||||
dres->src_id, dres->nb);
|
||||
}
|
||||
|
||||
/**
|
||||
* scmi_devm_register_notifier() - Managed registration of a notifier_block
|
||||
* for an event
|
||||
* @sdev: A reference to an scmi_device whose embedded struct device is to
|
||||
* be used for devres accounting.
|
||||
* @proto_id: Protocol ID
|
||||
* @evt_id: Event ID
|
||||
* @src_id: Source ID, when NULL register for events coming form ALL possible
|
||||
* sources
|
||||
* @nb: A standard notifier block to register for the specified event
|
||||
*
|
||||
* Generic devres managed helper to register a notifier_block against a
|
||||
* protocol event.
|
||||
*/
|
||||
static int scmi_devm_register_notifier(struct scmi_device *sdev,
|
||||
u8 proto_id, u8 evt_id, u32 *src_id,
|
||||
struct notifier_block *nb)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_notifier_devres *dres;
|
||||
|
||||
dres = devres_alloc(scmi_devm_release_notifier,
|
||||
sizeof(*dres), GFP_KERNEL);
|
||||
if (!dres)
|
||||
return -ENOMEM;
|
||||
|
||||
ret = scmi_register_notifier(sdev->handle, proto_id,
|
||||
evt_id, src_id, nb);
|
||||
if (ret) {
|
||||
devres_free(dres);
|
||||
return ret;
|
||||
}
|
||||
|
||||
dres->handle = sdev->handle;
|
||||
dres->proto_id = proto_id;
|
||||
dres->evt_id = evt_id;
|
||||
dres->nb = nb;
|
||||
if (src_id) {
|
||||
dres->__src_id = *src_id;
|
||||
dres->src_id = &dres->__src_id;
|
||||
} else {
|
||||
dres->src_id = NULL;
|
||||
}
|
||||
devres_add(&sdev->dev, dres);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int scmi_devm_notifier_match(struct device *dev, void *res, void *data)
|
||||
{
|
||||
struct scmi_notifier_devres *dres = res;
|
||||
struct scmi_notifier_devres *xres = data;
|
||||
|
||||
if (WARN_ON(!dres || !xres))
|
||||
return 0;
|
||||
|
||||
return dres->proto_id == xres->proto_id &&
|
||||
dres->evt_id == xres->evt_id &&
|
||||
dres->nb == xres->nb &&
|
||||
((!dres->src_id && !xres->src_id) ||
|
||||
(dres->src_id && xres->src_id &&
|
||||
dres->__src_id == xres->__src_id));
|
||||
}
|
||||
|
||||
/**
|
||||
* scmi_devm_unregister_notifier() - Managed un-registration of a
|
||||
* notifier_block for an event
|
||||
* @sdev: A reference to an scmi_device whose embedded struct device is to
|
||||
* be used for devres accounting.
|
||||
* @proto_id: Protocol ID
|
||||
* @evt_id: Event ID
|
||||
* @src_id: Source ID, when NULL register for events coming form ALL possible
|
||||
* sources
|
||||
* @nb: A standard notifier block to register for the specified event
|
||||
*
|
||||
* Generic devres managed helper to explicitly un-register a notifier_block
|
||||
* against a protocol event, which was previously registered using the above
|
||||
* @scmi_devm_register_notifier.
|
||||
*/
|
||||
static int scmi_devm_unregister_notifier(struct scmi_device *sdev,
|
||||
u8 proto_id, u8 evt_id, u32 *src_id,
|
||||
struct notifier_block *nb)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_notifier_devres dres;
|
||||
|
||||
dres.handle = sdev->handle;
|
||||
dres.proto_id = proto_id;
|
||||
dres.evt_id = evt_id;
|
||||
if (src_id) {
|
||||
dres.__src_id = *src_id;
|
||||
dres.src_id = &dres.__src_id;
|
||||
} else {
|
||||
dres.src_id = NULL;
|
||||
}
|
||||
|
||||
ret = devres_release(&sdev->dev, scmi_devm_release_notifier,
|
||||
scmi_devm_notifier_match, &dres);
|
||||
|
||||
WARN_ON(ret);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/**
|
||||
* scmi_protocols_late_init() - Worker for late initialization
|
||||
* @work: The work item to use associated to the proper SCMI instance
|
||||
@ -1428,6 +1610,8 @@ static void scmi_protocols_late_init(struct work_struct *work)
|
||||
* directly from an scmi_driver to register its own notifiers.
|
||||
*/
|
||||
static const struct scmi_notify_ops notify_ops = {
|
||||
.devm_register_event_notifier = scmi_devm_register_notifier,
|
||||
.devm_unregister_event_notifier = scmi_devm_unregister_notifier,
|
||||
.register_event_notifier = scmi_register_notifier,
|
||||
.unregister_event_notifier = scmi_unregister_notifier,
|
||||
};
|
||||
@ -1490,8 +1674,8 @@ int scmi_notification_init(struct scmi_handle *handle)
|
||||
|
||||
INIT_WORK(&ni->init_work, scmi_protocols_late_init);
|
||||
|
||||
scmi_set_notification_instance_data(handle, ni);
|
||||
handle->notify_ops = ¬ify_ops;
|
||||
handle->notify_priv = ni;
|
||||
/* Ensure handle is up to date */
|
||||
smp_wmb();
|
||||
|
||||
@ -1503,7 +1687,7 @@ int scmi_notification_init(struct scmi_handle *handle)
|
||||
|
||||
err:
|
||||
dev_warn(handle->dev, "Initialization Failed.\n");
|
||||
devres_release_group(handle->dev, NULL);
|
||||
devres_release_group(handle->dev, gid);
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
@ -1515,15 +1699,10 @@ void scmi_notification_exit(struct scmi_handle *handle)
|
||||
{
|
||||
struct scmi_notify_instance *ni;
|
||||
|
||||
/* Ensure notify_priv is updated */
|
||||
smp_rmb();
|
||||
if (!handle->notify_priv)
|
||||
ni = scmi_get_notification_instance_data(handle);
|
||||
if (!ni)
|
||||
return;
|
||||
ni = handle->notify_priv;
|
||||
|
||||
handle->notify_priv = NULL;
|
||||
/* Ensure handle is up to date */
|
||||
smp_wmb();
|
||||
scmi_set_notification_instance_data(handle, NULL);
|
||||
|
||||
/* Destroy while letting pending work complete */
|
||||
destroy_workqueue(ni->notify_wq);
|
||||
|
@@ -31,8 +31,12 @@ struct scmi_event {
	size_t max_report_sz;
};

struct scmi_protocol_handle;

/**
 * struct scmi_event_ops - Protocol helpers called by the notification core.
 * @get_num_sources: Returns the number of possible events' sources for this
 *		     protocol
 * @set_notify_enabled: Enable/disable the required evt_id/src_id notifications
 *			using the proper custom protocol commands.
 *			Return 0 on Success
@@ -46,22 +50,42 @@ struct scmi_event {
 *			process context.
 */
struct scmi_event_ops {
	int (*set_notify_enabled)(const struct scmi_handle *handle,
	int (*get_num_sources)(const struct scmi_protocol_handle *ph);
	int (*set_notify_enabled)(const struct scmi_protocol_handle *ph,
				  u8 evt_id, u32 src_id, bool enabled);
	void *(*fill_custom_report)(const struct scmi_handle *handle,
	void *(*fill_custom_report)(const struct scmi_protocol_handle *ph,
				    u8 evt_id, ktime_t timestamp,
				    const void *payld, size_t payld_sz,
				    void *report, u32 *src_id);
};

/**
 * struct scmi_protocol_events - Per-protocol description of available events
 * @queue_sz: Size in bytes of the per-protocol queue to use.
 * @ops: Array of protocol-specific events operations.
 * @evts: Array of supported protocol's events.
 * @num_events: Number of supported protocol's events described in @evts.
 * @num_sources: Number of protocol's sources, should be greater than 0; if not
 *		 available at compile time, it will be provided at run-time via
 *		 @get_num_sources.
 */
struct scmi_protocol_events {
	size_t queue_sz;
	const struct scmi_event_ops *ops;
	const struct scmi_event *evts;
	unsigned int num_events;
	unsigned int num_sources;
};

int scmi_notification_init(struct scmi_handle *handle);
void scmi_notification_exit(struct scmi_handle *handle);

int scmi_register_protocol_events(const struct scmi_handle *handle,
				  u8 proto_id, size_t queue_sz,
				  const struct scmi_event_ops *ops,
				  const struct scmi_event *evt, int num_events,
				  int num_sources);
struct scmi_protocol_handle;
int scmi_register_protocol_events(const struct scmi_handle *handle, u8 proto_id,
				  const struct scmi_protocol_handle *ph,
				  const struct scmi_protocol_events *ee);
void scmi_deregister_protocol_events(const struct scmi_handle *handle,
				     u8 proto_id);
int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
		const void *buf, size_t len, ktime_t ts);

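/*
 * Editor's note: an illustrative sketch (not part of the patch) of how an
 * SCMI driver user consumes notifications through the devres-managed ops
 * this series adds to scmi_notify_ops. The report layout
 * (scmi_perf_limits_report and its domain_id field) is assumed from the
 * upstream perf protocol and is not shown in these hunks.
 */
static int example_limits_notify_cb(struct notifier_block *nb,
				    unsigned long event, void *data)
{
	struct scmi_perf_limits_report *r = data;

	pr_info("PERF limits changed on domain %d\n", r->domain_id);
	return NOTIFY_OK;
}

static struct notifier_block example_limits_nb = {
	.notifier_call = example_limits_notify_cb,
};

/* In the scmi_driver probe, with sdev being the struct scmi_device: */
/*
 *	ret = sdev->handle->notify_ops->devm_register_event_notifier(sdev,
 *			SCMI_PROTOCOL_PERF,
 *			SCMI_EVENT_PERFORMANCE_LIMITS_CHANGED,
 *			NULL, &example_limits_nb);
 */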
@@ -2,7 +2,7 @@
/*
 * System Control and Management Interface (SCMI) Performance Protocol
 *
 * Copyright (C) 2018 ARM Ltd.
 * Copyright (C) 2018-2020 ARM Ltd.
 */

#define pr_fmt(fmt) "SCMI Notifications PERF - " fmt
@@ -11,6 +11,7 @@
#include <linux/of.h>
#include <linux/io.h>
#include <linux/io-64-nonatomic-hi-lo.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/pm_opp.h>
#include <linux/scmi_protocol.h>
@ -175,21 +176,21 @@ static enum scmi_performance_protocol_cmd evt_2_cmd[] = {
|
||||
PERF_NOTIFY_LEVEL,
|
||||
};
|
||||
|
||||
static int scmi_perf_attributes_get(const struct scmi_handle *handle,
|
||||
static int scmi_perf_attributes_get(const struct scmi_protocol_handle *ph,
|
||||
struct scmi_perf_info *pi)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_xfer *t;
|
||||
struct scmi_msg_resp_perf_attributes *attr;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, PROTOCOL_ATTRIBUTES,
|
||||
SCMI_PROTOCOL_PERF, 0, sizeof(*attr), &t);
|
||||
ret = ph->xops->xfer_get_init(ph, PROTOCOL_ATTRIBUTES, 0,
|
||||
sizeof(*attr), &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
attr = t->rx.buf;
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
if (!ret) {
|
||||
u16 flags = le16_to_cpu(attr->flags);
|
||||
|
||||
@ -200,28 +201,27 @@ static int scmi_perf_attributes_get(const struct scmi_handle *handle,
|
||||
pi->stats_size = le32_to_cpu(attr->stats_size);
|
||||
}
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int
|
||||
scmi_perf_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
|
||||
struct perf_dom_info *dom_info)
|
||||
scmi_perf_domain_attributes_get(const struct scmi_protocol_handle *ph,
|
||||
u32 domain, struct perf_dom_info *dom_info)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_xfer *t;
|
||||
struct scmi_msg_resp_perf_domain_attributes *attr;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, PERF_DOMAIN_ATTRIBUTES,
|
||||
SCMI_PROTOCOL_PERF, sizeof(domain),
|
||||
sizeof(*attr), &t);
|
||||
ret = ph->xops->xfer_get_init(ph, PERF_DOMAIN_ATTRIBUTES,
|
||||
sizeof(domain), sizeof(*attr), &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
put_unaligned_le32(domain, t->tx.buf);
|
||||
attr = t->rx.buf;
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
if (!ret) {
|
||||
u32 flags = le32_to_cpu(attr->flags);
|
||||
|
||||
@ -245,7 +245,7 @@ scmi_perf_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
|
||||
strlcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE);
|
||||
}
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
@ -257,7 +257,7 @@ static int opp_cmp_func(const void *opp1, const void *opp2)
|
||||
}
|
||||
|
||||
static int
|
||||
scmi_perf_describe_levels_get(const struct scmi_handle *handle, u32 domain,
|
||||
scmi_perf_describe_levels_get(const struct scmi_protocol_handle *ph, u32 domain,
|
||||
struct perf_dom_info *perf_dom)
|
||||
{
|
||||
int ret, cnt;
|
||||
@ -268,8 +268,8 @@ scmi_perf_describe_levels_get(const struct scmi_handle *handle, u32 domain,
|
||||
struct scmi_msg_perf_describe_levels *dom_info;
|
||||
struct scmi_msg_resp_perf_describe_levels *level_info;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, PERF_DESCRIBE_LEVELS,
|
||||
SCMI_PROTOCOL_PERF, sizeof(*dom_info), 0, &t);
|
||||
ret = ph->xops->xfer_get_init(ph, PERF_DESCRIBE_LEVELS,
|
||||
sizeof(*dom_info), 0, &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
@ -281,14 +281,14 @@ scmi_perf_describe_levels_get(const struct scmi_handle *handle, u32 domain,
|
||||
/* Set the number of OPPs to be skipped/already read */
|
||||
dom_info->level_index = cpu_to_le32(tot_opp_cnt);
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
if (ret)
|
||||
break;
|
||||
|
||||
num_returned = le16_to_cpu(level_info->num_returned);
|
||||
num_remaining = le16_to_cpu(level_info->num_remaining);
|
||||
if (tot_opp_cnt + num_returned > MAX_OPPS) {
|
||||
dev_err(handle->dev, "No. of OPPs exceeded MAX_OPPS");
|
||||
dev_err(ph->dev, "No. of OPPs exceeded MAX_OPPS");
|
||||
break;
|
||||
}
|
||||
|
||||
@ -299,13 +299,13 @@ scmi_perf_describe_levels_get(const struct scmi_handle *handle, u32 domain,
|
||||
opp->trans_latency_us = le16_to_cpu
|
||||
(level_info->opp[cnt].transition_latency_us);
|
||||
|
||||
dev_dbg(handle->dev, "Level %d Power %d Latency %dus\n",
|
||||
dev_dbg(ph->dev, "Level %d Power %d Latency %dus\n",
|
||||
opp->perf, opp->power, opp->trans_latency_us);
|
||||
}
|
||||
|
||||
tot_opp_cnt += num_returned;
|
||||
|
||||
scmi_reset_rx_to_maxsz(handle, t);
|
||||
ph->xops->reset_rx_to_maxsz(ph, t);
|
||||
/*
|
||||
* check for both returned and remaining to avoid infinite
|
||||
* loop due to buggy firmware
|
||||
@ -313,7 +313,7 @@ scmi_perf_describe_levels_get(const struct scmi_handle *handle, u32 domain,
|
||||
} while (num_returned && num_remaining);
|
||||
|
||||
perf_dom->opp_count = tot_opp_cnt;
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
|
||||
sort(perf_dom->opp, tot_opp_cnt, sizeof(*opp), opp_cmp_func, NULL);
|
||||
return ret;
|
||||
@ -353,14 +353,14 @@ static void scmi_perf_fc_ring_db(struct scmi_fc_db_info *db)
|
||||
#endif
|
||||
}
|
||||
|
||||
static int scmi_perf_mb_limits_set(const struct scmi_handle *handle, u32 domain,
|
||||
u32 max_perf, u32 min_perf)
|
||||
static int scmi_perf_mb_limits_set(const struct scmi_protocol_handle *ph,
|
||||
u32 domain, u32 max_perf, u32 min_perf)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_xfer *t;
|
||||
struct scmi_perf_set_limits *limits;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, PERF_LIMITS_SET, SCMI_PROTOCOL_PERF,
|
||||
ret = ph->xops->xfer_get_init(ph, PERF_LIMITS_SET,
|
||||
sizeof(*limits), 0, &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
@ -370,16 +370,16 @@ static int scmi_perf_mb_limits_set(const struct scmi_handle *handle, u32 domain,
|
||||
limits->max_level = cpu_to_le32(max_perf);
|
||||
limits->min_level = cpu_to_le32(min_perf);
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int scmi_perf_limits_set(const struct scmi_handle *handle, u32 domain,
|
||||
u32 max_perf, u32 min_perf)
|
||||
static int scmi_perf_limits_set(const struct scmi_protocol_handle *ph,
|
||||
u32 domain, u32 max_perf, u32 min_perf)
|
||||
{
|
||||
struct scmi_perf_info *pi = handle->perf_priv;
|
||||
struct scmi_perf_info *pi = ph->get_priv(ph);
|
||||
struct perf_dom_info *dom = pi->dom_info + domain;
|
||||
|
||||
if (dom->fc_info && dom->fc_info->limit_set_addr) {
|
||||
@ -389,24 +389,24 @@ static int scmi_perf_limits_set(const struct scmi_handle *handle, u32 domain,
|
||||
return 0;
|
||||
}
|
||||
|
||||
return scmi_perf_mb_limits_set(handle, domain, max_perf, min_perf);
|
||||
return scmi_perf_mb_limits_set(ph, domain, max_perf, min_perf);
|
||||
}
|
||||
|
||||
static int scmi_perf_mb_limits_get(const struct scmi_handle *handle, u32 domain,
|
||||
u32 *max_perf, u32 *min_perf)
|
||||
static int scmi_perf_mb_limits_get(const struct scmi_protocol_handle *ph,
|
||||
u32 domain, u32 *max_perf, u32 *min_perf)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_xfer *t;
|
||||
struct scmi_perf_get_limits *limits;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, PERF_LIMITS_GET, SCMI_PROTOCOL_PERF,
|
||||
ret = ph->xops->xfer_get_init(ph, PERF_LIMITS_GET,
|
||||
sizeof(__le32), 0, &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
put_unaligned_le32(domain, t->tx.buf);
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
if (!ret) {
|
||||
limits = t->rx.buf;
|
||||
|
||||
@ -414,14 +414,14 @@ static int scmi_perf_mb_limits_get(const struct scmi_handle *handle, u32 domain,
|
||||
*min_perf = le32_to_cpu(limits->min_level);
|
||||
}
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int scmi_perf_limits_get(const struct scmi_handle *handle, u32 domain,
|
||||
u32 *max_perf, u32 *min_perf)
|
||||
static int scmi_perf_limits_get(const struct scmi_protocol_handle *ph,
|
||||
u32 domain, u32 *max_perf, u32 *min_perf)
|
||||
{
|
||||
struct scmi_perf_info *pi = handle->perf_priv;
|
||||
struct scmi_perf_info *pi = ph->get_priv(ph);
|
||||
struct perf_dom_info *dom = pi->dom_info + domain;
|
||||
|
||||
if (dom->fc_info && dom->fc_info->limit_get_addr) {
|
||||
@ -430,18 +430,17 @@ static int scmi_perf_limits_get(const struct scmi_handle *handle, u32 domain,
|
||||
return 0;
|
||||
}
|
||||
|
||||
return scmi_perf_mb_limits_get(handle, domain, max_perf, min_perf);
|
||||
return scmi_perf_mb_limits_get(ph, domain, max_perf, min_perf);
|
||||
}
|
||||
|
||||
static int scmi_perf_mb_level_set(const struct scmi_handle *handle, u32 domain,
|
||||
u32 level, bool poll)
|
||||
static int scmi_perf_mb_level_set(const struct scmi_protocol_handle *ph,
|
||||
u32 domain, u32 level, bool poll)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_xfer *t;
|
||||
struct scmi_perf_set_level *lvl;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, PERF_LEVEL_SET, SCMI_PROTOCOL_PERF,
|
||||
sizeof(*lvl), 0, &t);
|
||||
ret = ph->xops->xfer_get_init(ph, PERF_LEVEL_SET, sizeof(*lvl), 0, &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
@ -450,16 +449,16 @@ static int scmi_perf_mb_level_set(const struct scmi_handle *handle, u32 domain,
|
||||
lvl->domain = cpu_to_le32(domain);
|
||||
lvl->level = cpu_to_le32(level);
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int scmi_perf_level_set(const struct scmi_handle *handle, u32 domain,
|
||||
u32 level, bool poll)
|
||||
static int scmi_perf_level_set(const struct scmi_protocol_handle *ph,
|
||||
u32 domain, u32 level, bool poll)
|
||||
{
|
||||
struct scmi_perf_info *pi = handle->perf_priv;
|
||||
struct scmi_perf_info *pi = ph->get_priv(ph);
|
||||
struct perf_dom_info *dom = pi->dom_info + domain;
|
||||
|
||||
if (dom->fc_info && dom->fc_info->level_set_addr) {
|
||||
@ -468,16 +467,16 @@ static int scmi_perf_level_set(const struct scmi_handle *handle, u32 domain,
|
||||
return 0;
|
||||
}
|
||||
|
||||
return scmi_perf_mb_level_set(handle, domain, level, poll);
|
||||
return scmi_perf_mb_level_set(ph, domain, level, poll);
|
||||
}
|
||||
|
||||
static int scmi_perf_mb_level_get(const struct scmi_handle *handle, u32 domain,
|
||||
u32 *level, bool poll)
|
||||
static int scmi_perf_mb_level_get(const struct scmi_protocol_handle *ph,
|
||||
u32 domain, u32 *level, bool poll)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_xfer *t;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, PERF_LEVEL_GET, SCMI_PROTOCOL_PERF,
|
||||
ret = ph->xops->xfer_get_init(ph, PERF_LEVEL_GET,
|
||||
sizeof(u32), sizeof(u32), &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
@ -485,18 +484,18 @@ static int scmi_perf_mb_level_get(const struct scmi_handle *handle, u32 domain,
|
||||
t->hdr.poll_completion = poll;
|
||||
put_unaligned_le32(domain, t->tx.buf);
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
if (!ret)
|
||||
*level = get_unaligned_le32(t->rx.buf);
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int scmi_perf_level_get(const struct scmi_handle *handle, u32 domain,
|
||||
u32 *level, bool poll)
|
||||
static int scmi_perf_level_get(const struct scmi_protocol_handle *ph,
|
||||
u32 domain, u32 *level, bool poll)
|
||||
{
|
||||
struct scmi_perf_info *pi = handle->perf_priv;
|
||||
struct scmi_perf_info *pi = ph->get_priv(ph);
|
||||
struct perf_dom_info *dom = pi->dom_info + domain;
|
||||
|
||||
if (dom->fc_info && dom->fc_info->level_get_addr) {
|
||||
@ -504,10 +503,10 @@ static int scmi_perf_level_get(const struct scmi_handle *handle, u32 domain,
|
||||
return 0;
|
||||
}
|
||||
|
||||
return scmi_perf_mb_level_get(handle, domain, level, poll);
|
||||
return scmi_perf_mb_level_get(ph, domain, level, poll);
|
||||
}
|
||||
|
||||
static int scmi_perf_level_limits_notify(const struct scmi_handle *handle,
|
||||
static int scmi_perf_level_limits_notify(const struct scmi_protocol_handle *ph,
|
||||
u32 domain, int message_id,
|
||||
bool enable)
|
||||
{
|
||||
@ -515,8 +514,7 @@ static int scmi_perf_level_limits_notify(const struct scmi_handle *handle,
|
||||
struct scmi_xfer *t;
|
||||
struct scmi_perf_notify_level_or_limits *notify;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, message_id, SCMI_PROTOCOL_PERF,
|
||||
sizeof(*notify), 0, &t);
|
||||
ret = ph->xops->xfer_get_init(ph, message_id, sizeof(*notify), 0, &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
@ -524,9 +522,9 @@ static int scmi_perf_level_limits_notify(const struct scmi_handle *handle,
|
||||
notify->domain = cpu_to_le32(domain);
|
||||
notify->notify_enable = enable ? cpu_to_le32(BIT(0)) : 0;
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
@ -540,7 +538,7 @@ static bool scmi_perf_fc_size_is_valid(u32 msg, u32 size)
|
||||
}
|
||||
|
||||
static void
|
||||
scmi_perf_domain_desc_fc(const struct scmi_handle *handle, u32 domain,
|
||||
scmi_perf_domain_desc_fc(const struct scmi_protocol_handle *ph, u32 domain,
|
||||
u32 message_id, void __iomem **p_addr,
|
||||
struct scmi_fc_db_info **p_db)
|
||||
{
|
||||
@ -557,8 +555,7 @@ scmi_perf_domain_desc_fc(const struct scmi_handle *handle, u32 domain,
|
||||
if (!p_addr)
|
||||
return;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, PERF_DESCRIBE_FASTCHANNEL,
|
||||
SCMI_PROTOCOL_PERF,
|
||||
ret = ph->xops->xfer_get_init(ph, PERF_DESCRIBE_FASTCHANNEL,
|
||||
sizeof(*info), sizeof(*resp), &t);
|
||||
if (ret)
|
||||
return;
|
||||
@ -567,7 +564,7 @@ scmi_perf_domain_desc_fc(const struct scmi_handle *handle, u32 domain,
|
||||
info->domain = cpu_to_le32(domain);
|
||||
info->message_id = cpu_to_le32(message_id);
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
if (ret)
|
||||
goto err_xfer;
|
||||
|
||||
@ -579,20 +576,20 @@ scmi_perf_domain_desc_fc(const struct scmi_handle *handle, u32 domain,
|
||||
|
||||
phys_addr = le32_to_cpu(resp->chan_addr_low);
|
||||
phys_addr |= (u64)le32_to_cpu(resp->chan_addr_high) << 32;
|
||||
addr = devm_ioremap(handle->dev, phys_addr, size);
|
||||
addr = devm_ioremap(ph->dev, phys_addr, size);
|
||||
if (!addr)
|
||||
goto err_xfer;
|
||||
*p_addr = addr;
|
||||
|
||||
if (p_db && SUPPORTS_DOORBELL(flags)) {
|
||||
db = devm_kzalloc(handle->dev, sizeof(*db), GFP_KERNEL);
|
||||
db = devm_kzalloc(ph->dev, sizeof(*db), GFP_KERNEL);
|
||||
if (!db)
|
||||
goto err_xfer;
|
||||
|
||||
size = 1 << DOORBELL_REG_WIDTH(flags);
|
||||
phys_addr = le32_to_cpu(resp->db_addr_low);
|
||||
phys_addr |= (u64)le32_to_cpu(resp->db_addr_high) << 32;
|
||||
addr = devm_ioremap(handle->dev, phys_addr, size);
|
||||
addr = devm_ioremap(ph->dev, phys_addr, size);
|
||||
if (!addr)
|
||||
goto err_xfer;
|
||||
|
||||
@ -605,25 +602,25 @@ scmi_perf_domain_desc_fc(const struct scmi_handle *handle, u32 domain,
|
||||
*p_db = db;
|
||||
}
|
||||
err_xfer:
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
}
|
||||
|
||||
static void scmi_perf_domain_init_fc(const struct scmi_handle *handle,
|
||||
static void scmi_perf_domain_init_fc(const struct scmi_protocol_handle *ph,
|
||||
u32 domain, struct scmi_fc_info **p_fc)
|
||||
{
|
||||
struct scmi_fc_info *fc;
|
||||
|
||||
fc = devm_kzalloc(handle->dev, sizeof(*fc), GFP_KERNEL);
|
||||
fc = devm_kzalloc(ph->dev, sizeof(*fc), GFP_KERNEL);
|
||||
if (!fc)
|
||||
return;
|
||||
|
||||
scmi_perf_domain_desc_fc(handle, domain, PERF_LEVEL_SET,
|
||||
scmi_perf_domain_desc_fc(ph, domain, PERF_LEVEL_SET,
|
||||
&fc->level_set_addr, &fc->level_set_db);
|
||||
scmi_perf_domain_desc_fc(handle, domain, PERF_LEVEL_GET,
|
||||
scmi_perf_domain_desc_fc(ph, domain, PERF_LEVEL_GET,
|
||||
&fc->level_get_addr, NULL);
|
||||
scmi_perf_domain_desc_fc(handle, domain, PERF_LIMITS_SET,
|
||||
scmi_perf_domain_desc_fc(ph, domain, PERF_LIMITS_SET,
|
||||
&fc->limit_set_addr, &fc->limit_set_db);
|
||||
scmi_perf_domain_desc_fc(handle, domain, PERF_LIMITS_GET,
|
||||
scmi_perf_domain_desc_fc(ph, domain, PERF_LIMITS_GET,
|
||||
&fc->limit_get_addr, NULL);
|
||||
*p_fc = fc;
|
||||
}
|
||||
@ -640,14 +637,14 @@ static int scmi_dev_domain_id(struct device *dev)
|
||||
return clkspec.args[0];
|
||||
}
|
||||
|
||||
static int scmi_dvfs_device_opps_add(const struct scmi_handle *handle,
|
||||
static int scmi_dvfs_device_opps_add(const struct scmi_protocol_handle *ph,
|
||||
struct device *dev)
|
||||
{
|
||||
int idx, ret, domain;
|
||||
unsigned long freq;
|
||||
struct scmi_opp *opp;
|
||||
struct perf_dom_info *dom;
|
||||
struct scmi_perf_info *pi = handle->perf_priv;
|
||||
struct scmi_perf_info *pi = ph->get_priv(ph);
|
||||
|
||||
domain = scmi_dev_domain_id(dev);
|
||||
if (domain < 0)
|
||||
@ -672,11 +669,12 @@ static int scmi_dvfs_device_opps_add(const struct scmi_handle *handle,
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int scmi_dvfs_transition_latency_get(const struct scmi_handle *handle,
|
||||
static int
|
||||
scmi_dvfs_transition_latency_get(const struct scmi_protocol_handle *ph,
|
||||
struct device *dev)
|
||||
{
|
||||
struct perf_dom_info *dom;
|
||||
struct scmi_perf_info *pi = handle->perf_priv;
|
||||
struct scmi_perf_info *pi = ph->get_priv(ph);
|
||||
int domain = scmi_dev_domain_id(dev);
|
||||
|
||||
if (domain < 0)
|
||||
@ -687,35 +685,35 @@ static int scmi_dvfs_transition_latency_get(const struct scmi_handle *handle,
|
||||
return dom->opp[dom->opp_count - 1].trans_latency_us * 1000;
|
||||
}
|
||||
|
||||
static int scmi_dvfs_freq_set(const struct scmi_handle *handle, u32 domain,
|
||||
static int scmi_dvfs_freq_set(const struct scmi_protocol_handle *ph, u32 domain,
|
||||
unsigned long freq, bool poll)
|
||||
{
|
||||
struct scmi_perf_info *pi = handle->perf_priv;
|
||||
struct scmi_perf_info *pi = ph->get_priv(ph);
|
||||
struct perf_dom_info *dom = pi->dom_info + domain;
|
||||
|
||||
return scmi_perf_level_set(handle, domain, freq / dom->mult_factor,
|
||||
poll);
|
||||
return scmi_perf_level_set(ph, domain, freq / dom->mult_factor, poll);
|
||||
}
|
||||
|
||||
static int scmi_dvfs_freq_get(const struct scmi_handle *handle, u32 domain,
|
||||
static int scmi_dvfs_freq_get(const struct scmi_protocol_handle *ph, u32 domain,
|
||||
unsigned long *freq, bool poll)
|
||||
{
|
||||
int ret;
|
||||
u32 level;
|
||||
struct scmi_perf_info *pi = handle->perf_priv;
|
||||
struct scmi_perf_info *pi = ph->get_priv(ph);
|
||||
struct perf_dom_info *dom = pi->dom_info + domain;
|
||||
|
||||
ret = scmi_perf_level_get(handle, domain, &level, poll);
|
||||
ret = scmi_perf_level_get(ph, domain, &level, poll);
|
||||
if (!ret)
|
||||
*freq = level * dom->mult_factor;
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int scmi_dvfs_est_power_get(const struct scmi_handle *handle, u32 domain,
|
||||
unsigned long *freq, unsigned long *power)
|
||||
static int scmi_dvfs_est_power_get(const struct scmi_protocol_handle *ph,
|
||||
u32 domain, unsigned long *freq,
|
||||
unsigned long *power)
|
||||
{
|
||||
struct scmi_perf_info *pi = handle->perf_priv;
|
||||
struct scmi_perf_info *pi = ph->get_priv(ph);
|
||||
struct perf_dom_info *dom;
|
||||
unsigned long opp_freq;
|
||||
int idx, ret = -EINVAL;
|
||||
@ -739,18 +737,25 @@ static int scmi_dvfs_est_power_get(const struct scmi_handle *handle, u32 domain,
|
||||
return ret;
|
||||
}
|
||||
|
||||
static bool scmi_fast_switch_possible(const struct scmi_handle *handle,
|
||||
static bool scmi_fast_switch_possible(const struct scmi_protocol_handle *ph,
|
||||
struct device *dev)
|
||||
{
|
||||
struct perf_dom_info *dom;
|
||||
struct scmi_perf_info *pi = handle->perf_priv;
|
||||
struct scmi_perf_info *pi = ph->get_priv(ph);
|
||||
|
||||
dom = pi->dom_info + scmi_dev_domain_id(dev);
|
||||
|
||||
return dom->fc_info && dom->fc_info->level_set_addr;
|
||||
}
|
||||
|
||||
static const struct scmi_perf_ops perf_ops = {
|
||||
static bool scmi_power_scale_mw_get(const struct scmi_protocol_handle *ph)
|
||||
{
|
||||
struct scmi_perf_info *pi = ph->get_priv(ph);
|
||||
|
||||
return pi->power_scale_mw;
|
||||
}
|
||||
|
||||
static const struct scmi_perf_proto_ops perf_proto_ops = {
|
||||
.limits_set = scmi_perf_limits_set,
|
||||
.limits_get = scmi_perf_limits_get,
|
||||
.level_set = scmi_perf_level_set,
|
||||
@ -762,9 +767,10 @@ static const struct scmi_perf_ops perf_ops = {
|
||||
.freq_get = scmi_dvfs_freq_get,
|
||||
.est_power_get = scmi_dvfs_est_power_get,
|
||||
.fast_switch_possible = scmi_fast_switch_possible,
|
||||
.power_scale_mw_get = scmi_power_scale_mw_get,
|
||||
};
|
||||
|
||||
static int scmi_perf_set_notify_enabled(const struct scmi_handle *handle,
|
||||
static int scmi_perf_set_notify_enabled(const struct scmi_protocol_handle *ph,
|
||||
u8 evt_id, u32 src_id, bool enable)
|
||||
{
|
||||
int ret, cmd_id;
|
||||
@ -773,7 +779,7 @@ static int scmi_perf_set_notify_enabled(const struct scmi_handle *handle,
|
||||
return -EINVAL;
|
||||
|
||||
cmd_id = evt_2_cmd[evt_id];
|
||||
ret = scmi_perf_level_limits_notify(handle, src_id, cmd_id, enable);
|
||||
ret = scmi_perf_level_limits_notify(ph, src_id, cmd_id, enable);
|
||||
if (ret)
|
||||
pr_debug("FAIL_ENABLED - evt[%X] dom[%d] - ret:%d\n",
|
||||
evt_id, src_id, ret);
|
||||
@ -781,7 +787,7 @@ static int scmi_perf_set_notify_enabled(const struct scmi_handle *handle,
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void *scmi_perf_fill_custom_report(const struct scmi_handle *handle,
|
||||
static void *scmi_perf_fill_custom_report(const struct scmi_protocol_handle *ph,
|
||||
u8 evt_id, ktime_t timestamp,
|
||||
const void *payld, size_t payld_sz,
|
||||
void *report, u32 *src_id)
|
||||
@ -829,6 +835,16 @@ static void *scmi_perf_fill_custom_report(const struct scmi_handle *handle,
|
||||
return rep;
|
||||
}
|
||||
|
||||
static int scmi_perf_get_num_sources(const struct scmi_protocol_handle *ph)
|
||||
{
|
||||
struct scmi_perf_info *pi = ph->get_priv(ph);
|
||||
|
||||
if (!pi)
|
||||
return -EINVAL;
|
||||
|
||||
return pi->num_domains;
|
||||
}
|
||||
|
||||
static const struct scmi_event perf_events[] = {
|
||||
{
|
||||
.id = SCMI_EVENT_PERFORMANCE_LIMITS_CHANGED,
|
||||
@ -843,28 +859,36 @@ static const struct scmi_event perf_events[] = {
|
||||
};
|
||||
|
||||
static const struct scmi_event_ops perf_event_ops = {
|
||||
.get_num_sources = scmi_perf_get_num_sources,
|
||||
.set_notify_enabled = scmi_perf_set_notify_enabled,
|
||||
.fill_custom_report = scmi_perf_fill_custom_report,
|
||||
};
|
||||
|
||||
static int scmi_perf_protocol_init(struct scmi_handle *handle)
|
||||
static const struct scmi_protocol_events perf_protocol_events = {
|
||||
.queue_sz = SCMI_PROTO_QUEUE_SZ,
|
||||
.ops = &perf_event_ops,
|
||||
.evts = perf_events,
|
||||
.num_events = ARRAY_SIZE(perf_events),
|
||||
};
|
||||
|
||||
static int scmi_perf_protocol_init(const struct scmi_protocol_handle *ph)
|
||||
{
|
||||
int domain;
|
||||
u32 version;
|
||||
struct scmi_perf_info *pinfo;
|
||||
|
||||
scmi_version_get(handle, SCMI_PROTOCOL_PERF, &version);
|
||||
ph->xops->version_get(ph, &version);
|
||||
|
||||
dev_dbg(handle->dev, "Performance Version %d.%d\n",
|
||||
dev_dbg(ph->dev, "Performance Version %d.%d\n",
|
||||
PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
|
||||
|
||||
pinfo = devm_kzalloc(handle->dev, sizeof(*pinfo), GFP_KERNEL);
|
||||
pinfo = devm_kzalloc(ph->dev, sizeof(*pinfo), GFP_KERNEL);
|
||||
if (!pinfo)
|
||||
return -ENOMEM;
|
||||
|
||||
scmi_perf_attributes_get(handle, pinfo);
|
||||
scmi_perf_attributes_get(ph, pinfo);
|
||||
|
||||
pinfo->dom_info = devm_kcalloc(handle->dev, pinfo->num_domains,
|
||||
pinfo->dom_info = devm_kcalloc(ph->dev, pinfo->num_domains,
|
||||
sizeof(*pinfo->dom_info), GFP_KERNEL);
|
||||
if (!pinfo->dom_info)
|
||||
return -ENOMEM;
|
||||
@ -872,24 +896,24 @@ static int scmi_perf_protocol_init(struct scmi_handle *handle)
|
||||
for (domain = 0; domain < pinfo->num_domains; domain++) {
|
||||
struct perf_dom_info *dom = pinfo->dom_info + domain;
|
||||
|
||||
scmi_perf_domain_attributes_get(handle, domain, dom);
|
||||
scmi_perf_describe_levels_get(handle, domain, dom);
|
||||
scmi_perf_domain_attributes_get(ph, domain, dom);
|
||||
scmi_perf_describe_levels_get(ph, domain, dom);
|
||||
|
||||
if (dom->perf_fastchannels)
|
||||
scmi_perf_domain_init_fc(handle, domain, &dom->fc_info);
|
||||
scmi_perf_domain_init_fc(ph, domain, &dom->fc_info);
|
||||
}
|
||||
|
||||
scmi_register_protocol_events(handle,
|
||||
SCMI_PROTOCOL_PERF, SCMI_PROTO_QUEUE_SZ,
|
||||
&perf_event_ops, perf_events,
|
||||
ARRAY_SIZE(perf_events),
|
||||
pinfo->num_domains);
|
||||
|
||||
pinfo->version = version;
|
||||
handle->perf_ops = &perf_ops;
|
||||
handle->perf_priv = pinfo;
|
||||
|
||||
return 0;
|
||||
return ph->set_priv(ph, pinfo);
|
||||
}
|
||||
|
||||
DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(SCMI_PROTOCOL_PERF, perf)
static const struct scmi_protocol scmi_perf = {
	.id = SCMI_PROTOCOL_PERF,
	.owner = THIS_MODULE,
	.init_instance = &scmi_perf_protocol_init,
	.ops = &perf_proto_ops,
	.events = &perf_protocol_events,
};

DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(perf, scmi_perf)

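/*
 * Editor's note, summarizing the shape of this conversion: the perf protocol
 * no longer publishes itself by writing into the global handle at init time,
 *
 *	handle->perf_ops = &perf_ops;
 *	handle->perf_priv = pinfo;
 *
 * but instead stores its state against its own protocol instance,
 *
 *	return ph->set_priv(ph, pinfo);
 *
 * and exposes its operations and events through the scmi_perf descriptor
 * above, registered with the core via
 * DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(perf, scmi_perf).
 */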
@@ -2,11 +2,12 @@
/*
 * System Control and Management Interface (SCMI) Power Protocol
 *
 * Copyright (C) 2018 ARM Ltd.
 * Copyright (C) 2018-2020 ARM Ltd.
 */

#define pr_fmt(fmt) "SCMI Notifications POWER - " fmt

#include <linux/module.h>
#include <linux/scmi_protocol.h>

#include "common.h"
@ -68,21 +69,21 @@ struct scmi_power_info {
|
||||
struct power_dom_info *dom_info;
|
||||
};
|
||||
|
||||
static int scmi_power_attributes_get(const struct scmi_handle *handle,
|
||||
static int scmi_power_attributes_get(const struct scmi_protocol_handle *ph,
|
||||
struct scmi_power_info *pi)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_xfer *t;
|
||||
struct scmi_msg_resp_power_attributes *attr;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, PROTOCOL_ATTRIBUTES,
|
||||
SCMI_PROTOCOL_POWER, 0, sizeof(*attr), &t);
|
||||
ret = ph->xops->xfer_get_init(ph, PROTOCOL_ATTRIBUTES,
|
||||
0, sizeof(*attr), &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
attr = t->rx.buf;
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
if (!ret) {
|
||||
pi->num_domains = le16_to_cpu(attr->num_domains);
|
||||
pi->stats_addr = le32_to_cpu(attr->stats_addr_low) |
|
||||
@ -90,28 +91,27 @@ static int scmi_power_attributes_get(const struct scmi_handle *handle,
|
||||
pi->stats_size = le32_to_cpu(attr->stats_size);
|
||||
}
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int
|
||||
scmi_power_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
|
||||
struct power_dom_info *dom_info)
|
||||
scmi_power_domain_attributes_get(const struct scmi_protocol_handle *ph,
|
||||
u32 domain, struct power_dom_info *dom_info)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_xfer *t;
|
||||
struct scmi_msg_resp_power_domain_attributes *attr;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, POWER_DOMAIN_ATTRIBUTES,
|
||||
SCMI_PROTOCOL_POWER, sizeof(domain),
|
||||
sizeof(*attr), &t);
|
||||
ret = ph->xops->xfer_get_init(ph, POWER_DOMAIN_ATTRIBUTES,
|
||||
sizeof(domain), sizeof(*attr), &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
put_unaligned_le32(domain, t->tx.buf);
|
||||
attr = t->rx.buf;
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
if (!ret) {
|
||||
u32 flags = le32_to_cpu(attr->flags);
|
||||
|
||||
@ -121,19 +121,18 @@ scmi_power_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
|
||||
strlcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE);
|
||||
}
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int
|
||||
scmi_power_state_set(const struct scmi_handle *handle, u32 domain, u32 state)
|
||||
static int scmi_power_state_set(const struct scmi_protocol_handle *ph,
|
||||
u32 domain, u32 state)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_xfer *t;
|
||||
struct scmi_power_set_state *st;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, POWER_STATE_SET, SCMI_PROTOCOL_POWER,
|
||||
sizeof(*st), 0, &t);
|
||||
ret = ph->xops->xfer_get_init(ph, POWER_STATE_SET, sizeof(*st), 0, &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
@ -142,64 +141,64 @@ scmi_power_state_set(const struct scmi_handle *handle, u32 domain, u32 state)
|
||||
st->domain = cpu_to_le32(domain);
|
||||
st->state = cpu_to_le32(state);
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int
|
||||
scmi_power_state_get(const struct scmi_handle *handle, u32 domain, u32 *state)
|
||||
static int scmi_power_state_get(const struct scmi_protocol_handle *ph,
|
||||
u32 domain, u32 *state)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_xfer *t;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, POWER_STATE_GET, SCMI_PROTOCOL_POWER,
|
||||
sizeof(u32), sizeof(u32), &t);
|
||||
ret = ph->xops->xfer_get_init(ph, POWER_STATE_GET, sizeof(u32), sizeof(u32), &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
put_unaligned_le32(domain, t->tx.buf);
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
if (!ret)
|
||||
*state = get_unaligned_le32(t->rx.buf);
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int scmi_power_num_domains_get(const struct scmi_handle *handle)
|
||||
static int scmi_power_num_domains_get(const struct scmi_protocol_handle *ph)
|
||||
{
|
||||
struct scmi_power_info *pi = handle->power_priv;
|
||||
struct scmi_power_info *pi = ph->get_priv(ph);
|
||||
|
||||
return pi->num_domains;
|
||||
}
|
||||
|
||||
static char *scmi_power_name_get(const struct scmi_handle *handle, u32 domain)
|
||||
static char *scmi_power_name_get(const struct scmi_protocol_handle *ph,
|
||||
u32 domain)
|
||||
{
|
||||
struct scmi_power_info *pi = handle->power_priv;
|
||||
struct scmi_power_info *pi = ph->get_priv(ph);
|
||||
struct power_dom_info *dom = pi->dom_info + domain;
|
||||
|
||||
return dom->name;
|
||||
}
|
||||
|
||||
static const struct scmi_power_ops power_ops = {
|
||||
static const struct scmi_power_proto_ops power_proto_ops = {
|
||||
.num_domains_get = scmi_power_num_domains_get,
|
||||
.name_get = scmi_power_name_get,
|
||||
.state_set = scmi_power_state_set,
|
||||
.state_get = scmi_power_state_get,
|
||||
};
|
||||
|
||||
static int scmi_power_request_notify(const struct scmi_handle *handle,
|
||||
static int scmi_power_request_notify(const struct scmi_protocol_handle *ph,
|
||||
u32 domain, bool enable)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_xfer *t;
|
||||
struct scmi_power_state_notify *notify;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, POWER_STATE_NOTIFY,
|
||||
SCMI_PROTOCOL_POWER, sizeof(*notify), 0, &t);
|
||||
ret = ph->xops->xfer_get_init(ph, POWER_STATE_NOTIFY,
|
||||
sizeof(*notify), 0, &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
@ -207,18 +206,18 @@ static int scmi_power_request_notify(const struct scmi_handle *handle,
|
||||
notify->domain = cpu_to_le32(domain);
|
||||
notify->notify_enable = enable ? cpu_to_le32(BIT(0)) : 0;
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int scmi_power_set_notify_enabled(const struct scmi_handle *handle,
|
||||
static int scmi_power_set_notify_enabled(const struct scmi_protocol_handle *ph,
|
||||
u8 evt_id, u32 src_id, bool enable)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = scmi_power_request_notify(handle, src_id, enable);
|
||||
ret = scmi_power_request_notify(ph, src_id, enable);
|
||||
if (ret)
|
||||
pr_debug("FAIL_ENABLE - evt[%X] dom[%d] - ret:%d\n",
|
||||
evt_id, src_id, ret);
|
||||
@ -226,7 +225,8 @@ static int scmi_power_set_notify_enabled(const struct scmi_handle *handle,
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void *scmi_power_fill_custom_report(const struct scmi_handle *handle,
|
||||
static void *
|
||||
scmi_power_fill_custom_report(const struct scmi_protocol_handle *ph,
|
||||
u8 evt_id, ktime_t timestamp,
|
||||
const void *payld, size_t payld_sz,
|
||||
void *report, u32 *src_id)
|
||||
@ -246,6 +246,16 @@ static void *scmi_power_fill_custom_report(const struct scmi_handle *handle,
|
||||
return r;
|
||||
}
|
||||
|
||||
static int scmi_power_get_num_sources(const struct scmi_protocol_handle *ph)
|
||||
{
|
||||
struct scmi_power_info *pinfo = ph->get_priv(ph);
|
||||
|
||||
if (!pinfo)
|
||||
return -EINVAL;
|
||||
|
||||
return pinfo->num_domains;
|
||||
}
|
||||
|
||||
static const struct scmi_event power_events[] = {
|
||||
{
|
||||
.id = SCMI_EVENT_POWER_STATE_CHANGED,
|
||||
@@ -256,28 +266,36 @@ static const struct scmi_event power_events[] = {
|
||||
};
|
||||
|
||||
static const struct scmi_event_ops power_event_ops = {
|
||||
.get_num_sources = scmi_power_get_num_sources,
|
||||
.set_notify_enabled = scmi_power_set_notify_enabled,
|
||||
.fill_custom_report = scmi_power_fill_custom_report,
|
||||
};
|
||||
|
||||
static int scmi_power_protocol_init(struct scmi_handle *handle)
|
||||
static const struct scmi_protocol_events power_protocol_events = {
|
||||
.queue_sz = SCMI_PROTO_QUEUE_SZ,
|
||||
.ops = &power_event_ops,
|
||||
.evts = power_events,
|
||||
.num_events = ARRAY_SIZE(power_events),
|
||||
};
|
||||
|
||||
static int scmi_power_protocol_init(const struct scmi_protocol_handle *ph)
|
||||
{
|
||||
int domain;
|
||||
u32 version;
|
||||
struct scmi_power_info *pinfo;
|
||||
|
||||
scmi_version_get(handle, SCMI_PROTOCOL_POWER, &version);
|
||||
ph->xops->version_get(ph, &version);
|
||||
|
||||
dev_dbg(handle->dev, "Power Version %d.%d\n",
|
||||
dev_dbg(ph->dev, "Power Version %d.%d\n",
|
||||
PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
|
||||
|
||||
pinfo = devm_kzalloc(handle->dev, sizeof(*pinfo), GFP_KERNEL);
|
||||
pinfo = devm_kzalloc(ph->dev, sizeof(*pinfo), GFP_KERNEL);
|
||||
if (!pinfo)
|
||||
return -ENOMEM;
|
||||
|
||||
scmi_power_attributes_get(handle, pinfo);
|
||||
scmi_power_attributes_get(ph, pinfo);
|
||||
|
||||
pinfo->dom_info = devm_kcalloc(handle->dev, pinfo->num_domains,
|
||||
pinfo->dom_info = devm_kcalloc(ph->dev, pinfo->num_domains,
|
||||
sizeof(*pinfo->dom_info), GFP_KERNEL);
|
||||
if (!pinfo->dom_info)
|
||||
return -ENOMEM;
|
||||
@@ -285,20 +303,20 @@ static int scmi_power_protocol_init(struct scmi_handle *handle)
|
||||
for (domain = 0; domain < pinfo->num_domains; domain++) {
|
||||
struct power_dom_info *dom = pinfo->dom_info + domain;
|
||||
|
||||
scmi_power_domain_attributes_get(handle, domain, dom);
|
||||
scmi_power_domain_attributes_get(ph, domain, dom);
|
||||
}
|
||||
|
||||
scmi_register_protocol_events(handle,
|
||||
SCMI_PROTOCOL_POWER, SCMI_PROTO_QUEUE_SZ,
|
||||
&power_event_ops, power_events,
|
||||
ARRAY_SIZE(power_events),
|
||||
pinfo->num_domains);
|
||||
|
||||
pinfo->version = version;
|
||||
handle->power_ops = &power_ops;
|
||||
handle->power_priv = pinfo;
|
||||
|
||||
return 0;
|
||||
return ph->set_priv(ph, pinfo);
|
||||
}
|
||||
|
||||
DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(SCMI_PROTOCOL_POWER, power)
|
||||
static const struct scmi_protocol scmi_power = {
|
||||
.id = SCMI_PROTOCOL_POWER,
|
||||
.owner = THIS_MODULE,
|
||||
.init_instance = &scmi_power_protocol_init,
|
||||
.ops = &power_proto_ops,
|
||||
.events = &power_protocol_events,
|
||||
};
|
||||
|
||||
DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(power, scmi_power)
|
||||
|
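For reference, the conversion applied to the Power protocol above (and repeated below for Reset, System and the new Voltage protocol) always has the same shape: instance data and core transfer helpers are reached through the protocol handle instead of hanging ops and private data off the global scmi_handle. The following is a condensed, hypothetical sketch of that shape distilled from the hunks in this merge; scmi_foo_*, scmi_foo_info and SCMI_PROTOCOL_FOO are placeholder names, not code from the merge itself.

struct scmi_foo_info {
	u32 version;
};

static int scmi_foo_protocol_init(const struct scmi_protocol_handle *ph)
{
	u32 version;
	struct scmi_foo_info *pinfo;

	/* Core xfer helpers are now reached through ph->xops */
	ph->xops->version_get(ph, &version);

	pinfo = devm_kzalloc(ph->dev, sizeof(*pinfo), GFP_KERNEL);
	if (!pinfo)
		return -ENOMEM;

	pinfo->version = version;
	/* Instance data is retrieved later with ph->get_priv(ph) */
	return ph->set_priv(ph, pinfo);
}

static const struct scmi_protocol scmi_foo = {
	.id = SCMI_PROTOCOL_FOO,
	.owner = THIS_MODULE,
	.init_instance = &scmi_foo_protocol_init,
	.ops = NULL,	/* per-protocol ops table would go here */
	.events = NULL,	/* optional scmi_protocol_events description */
};

DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(foo, scmi_foo)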
@@ -2,11 +2,12 @@
|
||||
/*
|
||||
* System Control and Management Interface (SCMI) Reset Protocol
|
||||
*
|
||||
* Copyright (C) 2019 ARM Ltd.
|
||||
* Copyright (C) 2019-2020 ARM Ltd.
|
||||
*/
|
||||
|
||||
#define pr_fmt(fmt) "SCMI Notifications RESET - " fmt
|
||||
|
||||
#include <linux/module.h>
|
||||
#include <linux/scmi_protocol.h>
|
||||
|
||||
#include "common.h"
|
||||
@@ -64,46 +65,45 @@ struct scmi_reset_info {
|
||||
struct reset_dom_info *dom_info;
|
||||
};
|
||||
|
||||
static int scmi_reset_attributes_get(const struct scmi_handle *handle,
|
||||
static int scmi_reset_attributes_get(const struct scmi_protocol_handle *ph,
|
||||
struct scmi_reset_info *pi)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_xfer *t;
|
||||
u32 attr;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, PROTOCOL_ATTRIBUTES,
|
||||
SCMI_PROTOCOL_RESET, 0, sizeof(attr), &t);
|
||||
ret = ph->xops->xfer_get_init(ph, PROTOCOL_ATTRIBUTES,
|
||||
0, sizeof(attr), &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
if (!ret) {
|
||||
attr = get_unaligned_le32(t->rx.buf);
|
||||
pi->num_domains = attr & NUM_RESET_DOMAIN_MASK;
|
||||
}
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int
|
||||
scmi_reset_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
|
||||
struct reset_dom_info *dom_info)
|
||||
scmi_reset_domain_attributes_get(const struct scmi_protocol_handle *ph,
|
||||
u32 domain, struct reset_dom_info *dom_info)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_xfer *t;
|
||||
struct scmi_msg_resp_reset_domain_attributes *attr;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, RESET_DOMAIN_ATTRIBUTES,
|
||||
SCMI_PROTOCOL_RESET, sizeof(domain),
|
||||
sizeof(*attr), &t);
|
||||
ret = ph->xops->xfer_get_init(ph, RESET_DOMAIN_ATTRIBUTES,
|
||||
sizeof(domain), sizeof(*attr), &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
put_unaligned_le32(domain, t->tx.buf);
|
||||
attr = t->rx.buf;
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
if (!ret) {
|
||||
u32 attributes = le32_to_cpu(attr->attributes);
|
||||
|
||||
@@ -115,47 +115,49 @@ scmi_reset_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
|
||||
strlcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE);
|
||||
}
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int scmi_reset_num_domains_get(const struct scmi_handle *handle)
|
||||
static int scmi_reset_num_domains_get(const struct scmi_protocol_handle *ph)
|
||||
{
|
||||
struct scmi_reset_info *pi = handle->reset_priv;
|
||||
struct scmi_reset_info *pi = ph->get_priv(ph);
|
||||
|
||||
return pi->num_domains;
|
||||
}
|
||||
|
||||
static char *scmi_reset_name_get(const struct scmi_handle *handle, u32 domain)
|
||||
static char *scmi_reset_name_get(const struct scmi_protocol_handle *ph,
|
||||
u32 domain)
|
||||
{
|
||||
struct scmi_reset_info *pi = handle->reset_priv;
|
||||
struct scmi_reset_info *pi = ph->get_priv(ph);
|
||||
|
||||
struct reset_dom_info *dom = pi->dom_info + domain;
|
||||
|
||||
return dom->name;
|
||||
}
|
||||
|
||||
static int scmi_reset_latency_get(const struct scmi_handle *handle, u32 domain)
|
||||
static int scmi_reset_latency_get(const struct scmi_protocol_handle *ph,
|
||||
u32 domain)
|
||||
{
|
||||
struct scmi_reset_info *pi = handle->reset_priv;
|
||||
struct scmi_reset_info *pi = ph->get_priv(ph);
|
||||
struct reset_dom_info *dom = pi->dom_info + domain;
|
||||
|
||||
return dom->latency_us;
|
||||
}
|
||||
|
||||
static int scmi_domain_reset(const struct scmi_handle *handle, u32 domain,
|
||||
static int scmi_domain_reset(const struct scmi_protocol_handle *ph, u32 domain,
|
||||
u32 flags, u32 state)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_xfer *t;
|
||||
struct scmi_msg_reset_domain_reset *dom;
|
||||
struct scmi_reset_info *pi = handle->reset_priv;
|
||||
struct scmi_reset_info *pi = ph->get_priv(ph);
|
||||
struct reset_dom_info *rdom = pi->dom_info + domain;
|
||||
|
||||
if (rdom->async_reset)
|
||||
flags |= ASYNCHRONOUS_RESET;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, RESET, SCMI_PROTOCOL_RESET,
|
||||
sizeof(*dom), 0, &t);
|
||||
ret = ph->xops->xfer_get_init(ph, RESET, sizeof(*dom), 0, &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
@@ -165,34 +167,35 @@ static int scmi_domain_reset(const struct scmi_handle *handle, u32 domain,
|
||||
dom->reset_state = cpu_to_le32(state);
|
||||
|
||||
if (rdom->async_reset)
|
||||
ret = scmi_do_xfer_with_response(handle, t);
|
||||
ret = ph->xops->do_xfer_with_response(ph, t);
|
||||
else
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int scmi_reset_domain_reset(const struct scmi_handle *handle, u32 domain)
|
||||
static int scmi_reset_domain_reset(const struct scmi_protocol_handle *ph,
|
||||
u32 domain)
|
||||
{
|
||||
return scmi_domain_reset(handle, domain, AUTONOMOUS_RESET,
|
||||
return scmi_domain_reset(ph, domain, AUTONOMOUS_RESET,
|
||||
ARCH_COLD_RESET);
|
||||
}
|
||||
|
||||
static int
|
||||
scmi_reset_domain_assert(const struct scmi_handle *handle, u32 domain)
|
||||
scmi_reset_domain_assert(const struct scmi_protocol_handle *ph, u32 domain)
|
||||
{
|
||||
return scmi_domain_reset(handle, domain, EXPLICIT_RESET_ASSERT,
|
||||
return scmi_domain_reset(ph, domain, EXPLICIT_RESET_ASSERT,
|
||||
ARCH_COLD_RESET);
|
||||
}
|
||||
|
||||
static int
|
||||
scmi_reset_domain_deassert(const struct scmi_handle *handle, u32 domain)
|
||||
scmi_reset_domain_deassert(const struct scmi_protocol_handle *ph, u32 domain)
|
||||
{
|
||||
return scmi_domain_reset(handle, domain, 0, ARCH_COLD_RESET);
|
||||
return scmi_domain_reset(ph, domain, 0, ARCH_COLD_RESET);
|
||||
}
|
||||
|
||||
static const struct scmi_reset_ops reset_ops = {
|
||||
static const struct scmi_reset_proto_ops reset_proto_ops = {
|
||||
.num_domains_get = scmi_reset_num_domains_get,
|
||||
.name_get = scmi_reset_name_get,
|
||||
.latency_get = scmi_reset_latency_get,
|
||||
@@ -201,16 +204,15 @@ static const struct scmi_reset_ops reset_ops = {
|
||||
.deassert = scmi_reset_domain_deassert,
|
||||
};
|
||||
|
||||
static int scmi_reset_notify(const struct scmi_handle *handle, u32 domain_id,
|
||||
bool enable)
|
||||
static int scmi_reset_notify(const struct scmi_protocol_handle *ph,
|
||||
u32 domain_id, bool enable)
|
||||
{
|
||||
int ret;
|
||||
u32 evt_cntl = enable ? RESET_TP_NOTIFY_ALL : 0;
|
||||
struct scmi_xfer *t;
|
||||
struct scmi_msg_reset_notify *cfg;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, RESET_NOTIFY,
|
||||
SCMI_PROTOCOL_RESET, sizeof(*cfg), 0, &t);
|
||||
ret = ph->xops->xfer_get_init(ph, RESET_NOTIFY, sizeof(*cfg), 0, &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
@@ -218,18 +220,18 @@ static int scmi_reset_notify(const struct scmi_handle *handle, u32 domain_id,
|
||||
cfg->id = cpu_to_le32(domain_id);
|
||||
cfg->event_control = cpu_to_le32(evt_cntl);
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int scmi_reset_set_notify_enabled(const struct scmi_handle *handle,
|
||||
static int scmi_reset_set_notify_enabled(const struct scmi_protocol_handle *ph,
|
||||
u8 evt_id, u32 src_id, bool enable)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = scmi_reset_notify(handle, src_id, enable);
|
||||
ret = scmi_reset_notify(ph, src_id, enable);
|
||||
if (ret)
|
||||
pr_debug("FAIL_ENABLED - evt[%X] dom[%d] - ret:%d\n",
|
||||
evt_id, src_id, ret);
|
||||
@@ -237,7 +239,8 @@ static int scmi_reset_set_notify_enabled(const struct scmi_handle *handle,
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void *scmi_reset_fill_custom_report(const struct scmi_handle *handle,
|
||||
static void *
|
||||
scmi_reset_fill_custom_report(const struct scmi_protocol_handle *ph,
|
||||
u8 evt_id, ktime_t timestamp,
|
||||
const void *payld, size_t payld_sz,
|
||||
void *report, u32 *src_id)
|
||||
@@ -257,6 +260,16 @@ static void *scmi_reset_fill_custom_report(const struct scmi_handle *handle,
|
||||
return r;
|
||||
}
|
||||
|
||||
static int scmi_reset_get_num_sources(const struct scmi_protocol_handle *ph)
|
||||
{
|
||||
struct scmi_reset_info *pinfo = ph->get_priv(ph);
|
||||
|
||||
if (!pinfo)
|
||||
return -EINVAL;
|
||||
|
||||
return pinfo->num_domains;
|
||||
}
|
||||
|
||||
static const struct scmi_event reset_events[] = {
|
||||
{
|
||||
.id = SCMI_EVENT_RESET_ISSUED,
|
||||
@@ -266,28 +279,36 @@ static const struct scmi_event reset_events[] = {
|
||||
};
|
||||
|
||||
static const struct scmi_event_ops reset_event_ops = {
|
||||
.get_num_sources = scmi_reset_get_num_sources,
|
||||
.set_notify_enabled = scmi_reset_set_notify_enabled,
|
||||
.fill_custom_report = scmi_reset_fill_custom_report,
|
||||
};
|
||||
|
||||
static int scmi_reset_protocol_init(struct scmi_handle *handle)
|
||||
static const struct scmi_protocol_events reset_protocol_events = {
|
||||
.queue_sz = SCMI_PROTO_QUEUE_SZ,
|
||||
.ops = &reset_event_ops,
|
||||
.evts = reset_events,
|
||||
.num_events = ARRAY_SIZE(reset_events),
|
||||
};
|
||||
|
||||
static int scmi_reset_protocol_init(const struct scmi_protocol_handle *ph)
|
||||
{
|
||||
int domain;
|
||||
u32 version;
|
||||
struct scmi_reset_info *pinfo;
|
||||
|
||||
scmi_version_get(handle, SCMI_PROTOCOL_RESET, &version);
|
||||
ph->xops->version_get(ph, &version);
|
||||
|
||||
dev_dbg(handle->dev, "Reset Version %d.%d\n",
|
||||
dev_dbg(ph->dev, "Reset Version %d.%d\n",
|
||||
PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
|
||||
|
||||
pinfo = devm_kzalloc(handle->dev, sizeof(*pinfo), GFP_KERNEL);
|
||||
pinfo = devm_kzalloc(ph->dev, sizeof(*pinfo), GFP_KERNEL);
|
||||
if (!pinfo)
|
||||
return -ENOMEM;
|
||||
|
||||
scmi_reset_attributes_get(handle, pinfo);
|
||||
scmi_reset_attributes_get(ph, pinfo);
|
||||
|
||||
pinfo->dom_info = devm_kcalloc(handle->dev, pinfo->num_domains,
|
||||
pinfo->dom_info = devm_kcalloc(ph->dev, pinfo->num_domains,
|
||||
sizeof(*pinfo->dom_info), GFP_KERNEL);
|
||||
if (!pinfo->dom_info)
|
||||
return -ENOMEM;
|
||||
@@ -295,20 +316,19 @@ static int scmi_reset_protocol_init(struct scmi_handle *handle)
|
||||
for (domain = 0; domain < pinfo->num_domains; domain++) {
|
||||
struct reset_dom_info *dom = pinfo->dom_info + domain;
|
||||
|
||||
scmi_reset_domain_attributes_get(handle, domain, dom);
|
||||
scmi_reset_domain_attributes_get(ph, domain, dom);
|
||||
}
|
||||
|
||||
scmi_register_protocol_events(handle,
|
||||
SCMI_PROTOCOL_RESET, SCMI_PROTO_QUEUE_SZ,
|
||||
&reset_event_ops, reset_events,
|
||||
ARRAY_SIZE(reset_events),
|
||||
pinfo->num_domains);
|
||||
|
||||
pinfo->version = version;
|
||||
handle->reset_ops = &reset_ops;
|
||||
handle->reset_priv = pinfo;
|
||||
|
||||
return 0;
|
||||
return ph->set_priv(ph, pinfo);
|
||||
}
|
||||
|
||||
DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(SCMI_PROTOCOL_RESET, reset)
|
||||
static const struct scmi_protocol scmi_reset = {
|
||||
.id = SCMI_PROTOCOL_RESET,
|
||||
.owner = THIS_MODULE,
|
||||
.init_instance = &scmi_reset_protocol_init,
|
||||
.ops = &reset_proto_ops,
|
||||
.events = &reset_protocol_events,
|
||||
};
|
||||
|
||||
DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(reset, scmi_reset)
|
||||
|
@@ -2,7 +2,7 @@
|
||||
/*
|
||||
* SCMI Generic power domain support.
|
||||
*
|
||||
* Copyright (C) 2018 ARM Ltd.
|
||||
* Copyright (C) 2018-2020 ARM Ltd.
|
||||
*/
|
||||
|
||||
#include <linux/err.h>
|
||||
@@ -11,9 +11,11 @@
|
||||
#include <linux/pm_domain.h>
|
||||
#include <linux/scmi_protocol.h>
|
||||
|
||||
static const struct scmi_power_proto_ops *power_ops;
|
||||
|
||||
struct scmi_pm_domain {
|
||||
struct generic_pm_domain genpd;
|
||||
const struct scmi_handle *handle;
|
||||
const struct scmi_protocol_handle *ph;
|
||||
const char *name;
|
||||
u32 domain;
|
||||
};
|
||||
@@ -25,16 +27,15 @@ static int scmi_pd_power(struct generic_pm_domain *domain, bool power_on)
|
||||
int ret;
|
||||
u32 state, ret_state;
|
||||
struct scmi_pm_domain *pd = to_scmi_pd(domain);
|
||||
const struct scmi_power_ops *ops = pd->handle->power_ops;
|
||||
|
||||
if (power_on)
|
||||
state = SCMI_POWER_STATE_GENERIC_ON;
|
||||
else
|
||||
state = SCMI_POWER_STATE_GENERIC_OFF;
|
||||
|
||||
ret = ops->state_set(pd->handle, pd->domain, state);
|
||||
ret = power_ops->state_set(pd->ph, pd->domain, state);
|
||||
if (!ret)
|
||||
ret = ops->state_get(pd->handle, pd->domain, &ret_state);
|
||||
ret = power_ops->state_get(pd->ph, pd->domain, &ret_state);
|
||||
if (!ret && state != ret_state)
|
||||
return -EIO;
|
||||
|
||||
@@ -60,11 +61,16 @@ static int scmi_pm_domain_probe(struct scmi_device *sdev)
|
||||
struct genpd_onecell_data *scmi_pd_data;
|
||||
struct generic_pm_domain **domains;
|
||||
const struct scmi_handle *handle = sdev->handle;
|
||||
struct scmi_protocol_handle *ph;
|
||||
|
||||
if (!handle || !handle->power_ops)
|
||||
if (!handle)
|
||||
return -ENODEV;
|
||||
|
||||
num_domains = handle->power_ops->num_domains_get(handle);
|
||||
power_ops = handle->devm_get_protocol(sdev, SCMI_PROTOCOL_POWER, &ph);
|
||||
if (IS_ERR(power_ops))
|
||||
return PTR_ERR(power_ops);
|
||||
|
||||
num_domains = power_ops->num_domains_get(ph);
|
||||
if (num_domains < 0) {
|
||||
dev_err(dev, "number of domains not found\n");
|
||||
return num_domains;
|
||||
@@ -85,14 +91,14 @@ static int scmi_pm_domain_probe(struct scmi_device *sdev)
|
||||
for (i = 0; i < num_domains; i++, scmi_pd++) {
|
||||
u32 state;
|
||||
|
||||
if (handle->power_ops->state_get(handle, i, &state)) {
|
||||
if (power_ops->state_get(ph, i, &state)) {
|
||||
dev_warn(dev, "failed to get state for domain %d\n", i);
|
||||
continue;
|
||||
}
|
||||
|
||||
scmi_pd->domain = i;
|
||||
scmi_pd->handle = handle;
|
||||
scmi_pd->name = handle->power_ops->name_get(handle, i);
|
||||
scmi_pd->ph = ph;
|
||||
scmi_pd->name = power_ops->name_get(ph, i);
|
||||
scmi_pd->genpd.name = scmi_pd->name;
|
||||
scmi_pd->genpd.power_off = scmi_pd_power_off;
|
||||
scmi_pd->genpd.power_on = scmi_pd_power_on;
|
||||
|
File diff suppressed because it is too large
@@ -9,9 +9,11 @@
|
||||
#include <linux/arm-smccc.h>
|
||||
#include <linux/device.h>
|
||||
#include <linux/err.h>
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/mutex.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_address.h>
|
||||
#include <linux/of_irq.h>
|
||||
#include <linux/slab.h>
|
||||
|
||||
#include "common.h"
|
||||
@@ -23,6 +25,8 @@
|
||||
* @shmem: Transmit/Receive shared memory area
|
||||
* @shmem_lock: Lock to protect access to Tx/Rx shared memory area
|
||||
* @func_id: smc/hvc call function id
|
||||
* @irq: Optional; employed when platforms indicates msg completion by intr.
|
||||
* @tx_complete: Optional, employed only when irq is valid.
|
||||
*/
|
||||
|
||||
struct scmi_smc {
|
||||
@@ -30,8 +34,19 @@ struct scmi_smc {
|
||||
struct scmi_shared_mem __iomem *shmem;
|
||||
struct mutex shmem_lock;
|
||||
u32 func_id;
|
||||
int irq;
|
||||
struct completion tx_complete;
|
||||
};
|
||||
|
||||
static irqreturn_t smc_msg_done_isr(int irq, void *data)
|
||||
{
|
||||
struct scmi_smc *scmi_info = data;
|
||||
|
||||
complete(&scmi_info->tx_complete);
|
||||
|
||||
return IRQ_HANDLED;
|
||||
}
|
||||
|
||||
static bool smc_chan_available(struct device *dev, int idx)
|
||||
{
|
||||
struct device_node *np = of_parse_phandle(dev->of_node, "shmem", 0);
|
||||
@@ -51,7 +66,7 @@ static int smc_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
|
||||
struct resource res;
|
||||
struct device_node *np;
|
||||
u32 func_id;
|
||||
int ret;
|
||||
int ret, irq;
|
||||
|
||||
if (!tx)
|
||||
return -ENODEV;
|
||||
@@ -79,6 +94,24 @@ static int smc_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
/*
|
||||
* If there is an interrupt named "a2p", then the service and
|
||||
* completion of a message is signaled by an interrupt rather than by
|
||||
* the return of the SMC call.
|
||||
*/
|
||||
irq = of_irq_get_byname(cdev->of_node, "a2p");
|
||||
if (irq > 0) {
|
||||
ret = devm_request_irq(dev, irq, smc_msg_done_isr,
|
||||
IRQF_NO_SUSPEND,
|
||||
dev_name(dev), scmi_info);
|
||||
if (ret) {
|
||||
dev_err(dev, "failed to setup SCMI smc irq\n");
|
||||
return ret;
|
||||
}
|
||||
init_completion(&scmi_info->tx_complete);
|
||||
scmi_info->irq = irq;
|
||||
}
|
||||
|
||||
scmi_info->func_id = func_id;
|
||||
scmi_info->cinfo = cinfo;
|
||||
mutex_init(&scmi_info->shmem_lock);
|
||||
@@ -110,7 +143,14 @@ static int smc_send_message(struct scmi_chan_info *cinfo,
|
||||
|
||||
shmem_tx_prepare(scmi_info->shmem, xfer);
|
||||
|
||||
if (scmi_info->irq)
|
||||
reinit_completion(&scmi_info->tx_complete);
|
||||
|
||||
arm_smccc_1_1_invoke(scmi_info->func_id, 0, 0, 0, 0, 0, 0, 0, &res);
|
||||
|
||||
if (scmi_info->irq)
|
||||
wait_for_completion(&scmi_info->tx_complete);
|
||||
|
||||
scmi_rx_callback(scmi_info->cinfo, shmem_read_header(scmi_info->shmem));
|
||||
|
||||
mutex_unlock(&scmi_info->shmem_lock);
|
||||
|
@@ -7,6 +7,7 @@
|
||||
|
||||
#define pr_fmt(fmt) "SCMI Notifications SYSTEM - " fmt
|
||||
|
||||
#include <linux/module.h>
|
||||
#include <linux/scmi_protocol.h>
|
||||
|
||||
#include "common.h"
|
||||
@@ -32,40 +33,41 @@ struct scmi_system_info {
|
||||
u32 version;
|
||||
};
|
||||
|
||||
static int scmi_system_request_notify(const struct scmi_handle *handle,
|
||||
static int scmi_system_request_notify(const struct scmi_protocol_handle *ph,
|
||||
bool enable)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_xfer *t;
|
||||
struct scmi_system_power_state_notify *notify;
|
||||
|
||||
ret = scmi_xfer_get_init(handle, SYSTEM_POWER_STATE_NOTIFY,
|
||||
SCMI_PROTOCOL_SYSTEM, sizeof(*notify), 0, &t);
|
||||
ret = ph->xops->xfer_get_init(ph, SYSTEM_POWER_STATE_NOTIFY,
|
||||
sizeof(*notify), 0, &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
notify = t->tx.buf;
|
||||
notify->notify_enable = enable ? cpu_to_le32(BIT(0)) : 0;
|
||||
|
||||
ret = scmi_do_xfer(handle, t);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
|
||||
scmi_xfer_put(handle, t);
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int scmi_system_set_notify_enabled(const struct scmi_handle *handle,
|
||||
static int scmi_system_set_notify_enabled(const struct scmi_protocol_handle *ph,
|
||||
u8 evt_id, u32 src_id, bool enable)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = scmi_system_request_notify(handle, enable);
|
||||
ret = scmi_system_request_notify(ph, enable);
|
||||
if (ret)
|
||||
pr_debug("FAIL_ENABLE - evt[%X] - ret:%d\n", evt_id, ret);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void *scmi_system_fill_custom_report(const struct scmi_handle *handle,
|
||||
static void *
|
||||
scmi_system_fill_custom_report(const struct scmi_protocol_handle *ph,
|
||||
u8 evt_id, ktime_t timestamp,
|
||||
const void *payld, size_t payld_sz,
|
||||
void *report, u32 *src_id)
|
||||
@@ -101,31 +103,38 @@ static const struct scmi_event_ops system_event_ops = {
|
||||
.fill_custom_report = scmi_system_fill_custom_report,
|
||||
};
|
||||
|
||||
static int scmi_system_protocol_init(struct scmi_handle *handle)
|
||||
static const struct scmi_protocol_events system_protocol_events = {
|
||||
.queue_sz = SCMI_PROTO_QUEUE_SZ,
|
||||
.ops = &system_event_ops,
|
||||
.evts = system_events,
|
||||
.num_events = ARRAY_SIZE(system_events),
|
||||
.num_sources = SCMI_SYSTEM_NUM_SOURCES,
|
||||
};
|
||||
|
||||
static int scmi_system_protocol_init(const struct scmi_protocol_handle *ph)
|
||||
{
|
||||
u32 version;
|
||||
struct scmi_system_info *pinfo;
|
||||
|
||||
scmi_version_get(handle, SCMI_PROTOCOL_SYSTEM, &version);
|
||||
ph->xops->version_get(ph, &version);
|
||||
|
||||
dev_dbg(handle->dev, "System Power Version %d.%d\n",
|
||||
dev_dbg(ph->dev, "System Power Version %d.%d\n",
|
||||
PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
|
||||
|
||||
pinfo = devm_kzalloc(handle->dev, sizeof(*pinfo), GFP_KERNEL);
|
||||
pinfo = devm_kzalloc(ph->dev, sizeof(*pinfo), GFP_KERNEL);
|
||||
if (!pinfo)
|
||||
return -ENOMEM;
|
||||
|
||||
scmi_register_protocol_events(handle,
|
||||
SCMI_PROTOCOL_SYSTEM, SCMI_PROTO_QUEUE_SZ,
|
||||
&system_event_ops,
|
||||
system_events,
|
||||
ARRAY_SIZE(system_events),
|
||||
SCMI_SYSTEM_NUM_SOURCES);
|
||||
|
||||
pinfo->version = version;
|
||||
handle->system_priv = pinfo;
|
||||
|
||||
return 0;
|
||||
return ph->set_priv(ph, pinfo);
|
||||
}
|
||||
|
||||
DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(SCMI_PROTOCOL_SYSTEM, system)
|
||||
static const struct scmi_protocol scmi_system = {
|
||||
.id = SCMI_PROTOCOL_SYSTEM,
|
||||
.owner = THIS_MODULE,
|
||||
.init_instance = &scmi_system_protocol_init,
|
||||
.ops = NULL,
|
||||
.events = &system_protocol_events,
|
||||
};
|
||||
|
||||
DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(system, scmi_system)
|
||||
|
drivers/firmware/arm_scmi/voltage.c (new file, 378 lines)
@@ -0,0 +1,378 @@
|
||||
// SPDX-License-Identifier: GPL-2.0
|
||||
/*
|
||||
* System Control and Management Interface (SCMI) Voltage Protocol
|
||||
*
|
||||
* Copyright (C) 2020 ARM Ltd.
|
||||
*/
|
||||
|
||||
#include <linux/scmi_protocol.h>
|
||||
|
||||
#include "common.h"
|
||||
|
||||
#define VOLTAGE_DOMS_NUM_MASK GENMASK(15, 0)
|
||||
#define REMAINING_LEVELS_MASK GENMASK(31, 16)
|
||||
#define RETURNED_LEVELS_MASK GENMASK(11, 0)
|
||||
|
||||
enum scmi_voltage_protocol_cmd {
|
||||
VOLTAGE_DOMAIN_ATTRIBUTES = 0x3,
|
||||
VOLTAGE_DESCRIBE_LEVELS = 0x4,
|
||||
VOLTAGE_CONFIG_SET = 0x5,
|
||||
VOLTAGE_CONFIG_GET = 0x6,
|
||||
VOLTAGE_LEVEL_SET = 0x7,
|
||||
VOLTAGE_LEVEL_GET = 0x8,
|
||||
};
|
||||
|
||||
#define NUM_VOLTAGE_DOMAINS(x) ((u16)(FIELD_GET(VOLTAGE_DOMS_NUM_MASK, (x))))
|
||||
|
||||
struct scmi_msg_resp_domain_attributes {
|
||||
__le32 attr;
|
||||
u8 name[SCMI_MAX_STR_SIZE];
|
||||
};
|
||||
|
||||
struct scmi_msg_cmd_describe_levels {
|
||||
__le32 domain_id;
|
||||
__le32 level_index;
|
||||
};
|
||||
|
||||
struct scmi_msg_resp_describe_levels {
|
||||
__le32 flags;
|
||||
#define NUM_REMAINING_LEVELS(f) ((u16)(FIELD_GET(REMAINING_LEVELS_MASK, (f))))
|
||||
#define NUM_RETURNED_LEVELS(f) ((u16)(FIELD_GET(RETURNED_LEVELS_MASK, (f))))
|
||||
#define SUPPORTS_SEGMENTED_LEVELS(f) ((f) & BIT(12))
|
||||
__le32 voltage[];
|
||||
};
|
||||
|
||||
struct scmi_msg_cmd_config_set {
|
||||
__le32 domain_id;
|
||||
__le32 config;
|
||||
};
|
||||
|
||||
struct scmi_msg_cmd_level_set {
|
||||
__le32 domain_id;
|
||||
__le32 flags;
|
||||
__le32 voltage_level;
|
||||
};
|
||||
|
||||
struct voltage_info {
|
||||
unsigned int version;
|
||||
unsigned int num_domains;
|
||||
struct scmi_voltage_info *domains;
|
||||
};
|
||||
|
||||
static int scmi_protocol_attributes_get(const struct scmi_protocol_handle *ph,
|
||||
struct voltage_info *vinfo)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_xfer *t;
|
||||
|
||||
ret = ph->xops->xfer_get_init(ph, PROTOCOL_ATTRIBUTES, 0,
|
||||
sizeof(__le32), &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
if (!ret)
|
||||
vinfo->num_domains =
|
||||
NUM_VOLTAGE_DOMAINS(get_unaligned_le32(t->rx.buf));
|
||||
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int scmi_init_voltage_levels(struct device *dev,
|
||||
struct scmi_voltage_info *v,
|
||||
u32 num_returned, u32 num_remaining,
|
||||
bool segmented)
|
||||
{
|
||||
u32 num_levels;
|
||||
|
||||
num_levels = num_returned + num_remaining;
|
||||
/*
|
||||
* segmented levels entries are represented by a single triplet
|
||||
* returned all in one go.
|
||||
*/
|
||||
if (!num_levels ||
|
||||
(segmented && (num_remaining || num_returned != 3))) {
|
||||
dev_err(dev,
|
||||
"Invalid level descriptor(%d/%d/%d) for voltage dom %d\n",
|
||||
num_levels, num_returned, num_remaining, v->id);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
v->levels_uv = devm_kcalloc(dev, num_levels, sizeof(u32), GFP_KERNEL);
|
||||
if (!v->levels_uv)
|
||||
return -ENOMEM;
|
||||
|
||||
v->num_levels = num_levels;
|
||||
v->segmented = segmented;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int scmi_voltage_descriptors_get(const struct scmi_protocol_handle *ph,
|
||||
struct voltage_info *vinfo)
|
||||
{
|
||||
int ret, dom;
|
||||
struct scmi_xfer *td, *tl;
|
||||
struct device *dev = ph->dev;
|
||||
struct scmi_msg_resp_domain_attributes *resp_dom;
|
||||
struct scmi_msg_resp_describe_levels *resp_levels;
|
||||
|
||||
ret = ph->xops->xfer_get_init(ph, VOLTAGE_DOMAIN_ATTRIBUTES,
|
||||
sizeof(__le32), sizeof(*resp_dom), &td);
|
||||
if (ret)
|
||||
return ret;
|
||||
resp_dom = td->rx.buf;
|
||||
|
||||
ret = ph->xops->xfer_get_init(ph, VOLTAGE_DESCRIBE_LEVELS,
|
||||
sizeof(__le64), 0, &tl);
|
||||
if (ret)
|
||||
goto outd;
|
||||
resp_levels = tl->rx.buf;
|
||||
|
||||
for (dom = 0; dom < vinfo->num_domains; dom++) {
|
||||
u32 desc_index = 0;
|
||||
u16 num_returned = 0, num_remaining = 0;
|
||||
struct scmi_msg_cmd_describe_levels *cmd;
|
||||
struct scmi_voltage_info *v;
|
||||
|
||||
/* Retrieve domain attributes at first ... */
|
||||
put_unaligned_le32(dom, td->tx.buf);
|
||||
ret = ph->xops->do_xfer(ph, td);
|
||||
/* Skip domain on comms error */
|
||||
if (ret)
|
||||
continue;
|
||||
|
||||
v = vinfo->domains + dom;
|
||||
v->id = dom;
|
||||
v->attributes = le32_to_cpu(resp_dom->attr);
|
||||
strlcpy(v->name, resp_dom->name, SCMI_MAX_STR_SIZE);
|
||||
|
||||
cmd = tl->tx.buf;
|
||||
/* ...then retrieve domain levels descriptions */
|
||||
do {
|
||||
u32 flags;
|
||||
int cnt;
|
||||
|
||||
cmd->domain_id = cpu_to_le32(v->id);
|
||||
cmd->level_index = desc_index;
|
||||
ret = ph->xops->do_xfer(ph, tl);
|
||||
if (ret)
|
||||
break;
|
||||
|
||||
flags = le32_to_cpu(resp_levels->flags);
|
||||
num_returned = NUM_RETURNED_LEVELS(flags);
|
||||
num_remaining = NUM_REMAINING_LEVELS(flags);
|
||||
|
||||
/* Allocate space for num_levels if not already done */
|
||||
if (!v->num_levels) {
|
||||
ret = scmi_init_voltage_levels(dev, v,
|
||||
num_returned,
|
||||
num_remaining,
|
||||
SUPPORTS_SEGMENTED_LEVELS(flags));
|
||||
if (ret)
|
||||
break;
|
||||
}
|
||||
|
||||
if (desc_index + num_returned > v->num_levels) {
|
||||
dev_err(ph->dev,
|
||||
"No. of voltage levels can't exceed %d\n",
|
||||
v->num_levels);
|
||||
ret = -EINVAL;
|
||||
break;
|
||||
}
|
||||
|
||||
for (cnt = 0; cnt < num_returned; cnt++) {
|
||||
s32 val;
|
||||
|
||||
val =
|
||||
(s32)le32_to_cpu(resp_levels->voltage[cnt]);
|
||||
v->levels_uv[desc_index + cnt] = val;
|
||||
if (val < 0)
|
||||
v->negative_volts_allowed = true;
|
||||
}
|
||||
|
||||
desc_index += num_returned;
|
||||
|
||||
ph->xops->reset_rx_to_maxsz(ph, tl);
|
||||
/* check both to avoid infinite loop due to buggy fw */
|
||||
} while (num_returned && num_remaining);
|
||||
|
||||
if (ret) {
|
||||
v->num_levels = 0;
|
||||
devm_kfree(dev, v->levels_uv);
|
||||
}
|
||||
|
||||
ph->xops->reset_rx_to_maxsz(ph, td);
|
||||
}
|
||||
|
||||
ph->xops->xfer_put(ph, tl);
|
||||
outd:
|
||||
ph->xops->xfer_put(ph, td);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int __scmi_voltage_get_u32(const struct scmi_protocol_handle *ph,
|
||||
u8 cmd_id, u32 domain_id, u32 *value)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_xfer *t;
|
||||
struct voltage_info *vinfo = ph->get_priv(ph);
|
||||
|
||||
if (domain_id >= vinfo->num_domains)
|
||||
return -EINVAL;
|
||||
|
||||
ret = ph->xops->xfer_get_init(ph, cmd_id, sizeof(__le32), 0, &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
put_unaligned_le32(domain_id, t->tx.buf);
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
if (!ret)
|
||||
*value = get_unaligned_le32(t->rx.buf);
|
||||
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int scmi_voltage_config_set(const struct scmi_protocol_handle *ph,
|
||||
u32 domain_id, u32 config)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_xfer *t;
|
||||
struct voltage_info *vinfo = ph->get_priv(ph);
|
||||
struct scmi_msg_cmd_config_set *cmd;
|
||||
|
||||
if (domain_id >= vinfo->num_domains)
|
||||
return -EINVAL;
|
||||
|
||||
ret = ph->xops->xfer_get_init(ph, VOLTAGE_CONFIG_SET,
|
||||
sizeof(*cmd), 0, &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
cmd = t->tx.buf;
|
||||
cmd->domain_id = cpu_to_le32(domain_id);
|
||||
cmd->config = cpu_to_le32(config & GENMASK(3, 0));
|
||||
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int scmi_voltage_config_get(const struct scmi_protocol_handle *ph,
|
||||
u32 domain_id, u32 *config)
|
||||
{
|
||||
return __scmi_voltage_get_u32(ph, VOLTAGE_CONFIG_GET,
|
||||
domain_id, config);
|
||||
}
|
||||
|
||||
static int scmi_voltage_level_set(const struct scmi_protocol_handle *ph,
|
||||
u32 domain_id, u32 flags, s32 volt_uV)
|
||||
{
|
||||
int ret;
|
||||
struct scmi_xfer *t;
|
||||
struct voltage_info *vinfo = ph->get_priv(ph);
|
||||
struct scmi_msg_cmd_level_set *cmd;
|
||||
|
||||
if (domain_id >= vinfo->num_domains)
|
||||
return -EINVAL;
|
||||
|
||||
ret = ph->xops->xfer_get_init(ph, VOLTAGE_LEVEL_SET,
|
||||
sizeof(*cmd), 0, &t);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
cmd = t->tx.buf;
|
||||
cmd->domain_id = cpu_to_le32(domain_id);
|
||||
cmd->flags = cpu_to_le32(flags);
|
||||
cmd->voltage_level = cpu_to_le32(volt_uV);
|
||||
|
||||
ret = ph->xops->do_xfer(ph, t);
|
||||
|
||||
ph->xops->xfer_put(ph, t);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int scmi_voltage_level_get(const struct scmi_protocol_handle *ph,
|
||||
u32 domain_id, s32 *volt_uV)
|
||||
{
|
||||
return __scmi_voltage_get_u32(ph, VOLTAGE_LEVEL_GET,
|
||||
domain_id, (u32 *)volt_uV);
|
||||
}
|
||||
|
||||
static const struct scmi_voltage_info * __must_check
|
||||
scmi_voltage_info_get(const struct scmi_protocol_handle *ph, u32 domain_id)
|
||||
{
|
||||
struct voltage_info *vinfo = ph->get_priv(ph);
|
||||
|
||||
if (domain_id >= vinfo->num_domains ||
|
||||
!vinfo->domains[domain_id].num_levels)
|
||||
return NULL;
|
||||
|
||||
return vinfo->domains + domain_id;
|
||||
}
|
||||
|
||||
static int scmi_voltage_domains_num_get(const struct scmi_protocol_handle *ph)
|
||||
{
|
||||
struct voltage_info *vinfo = ph->get_priv(ph);
|
||||
|
||||
return vinfo->num_domains;
|
||||
}
|
||||
|
||||
static struct scmi_voltage_proto_ops voltage_proto_ops = {
|
||||
.num_domains_get = scmi_voltage_domains_num_get,
|
||||
.info_get = scmi_voltage_info_get,
|
||||
.config_set = scmi_voltage_config_set,
|
||||
.config_get = scmi_voltage_config_get,
|
||||
.level_set = scmi_voltage_level_set,
|
||||
.level_get = scmi_voltage_level_get,
|
||||
};
|
||||
|
||||
static int scmi_voltage_protocol_init(const struct scmi_protocol_handle *ph)
|
||||
{
|
||||
int ret;
|
||||
u32 version;
|
||||
struct voltage_info *vinfo;
|
||||
|
||||
ret = ph->xops->version_get(ph, &version);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
dev_dbg(ph->dev, "Voltage Version %d.%d\n",
|
||||
PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
|
||||
|
||||
vinfo = devm_kzalloc(ph->dev, sizeof(*vinfo), GFP_KERNEL);
|
||||
if (!vinfo)
|
||||
return -ENOMEM;
|
||||
vinfo->version = version;
|
||||
|
||||
ret = scmi_protocol_attributes_get(ph, vinfo);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
if (vinfo->num_domains) {
|
||||
vinfo->domains = devm_kcalloc(ph->dev, vinfo->num_domains,
|
||||
sizeof(*vinfo->domains),
|
||||
GFP_KERNEL);
|
||||
if (!vinfo->domains)
|
||||
return -ENOMEM;
|
||||
ret = scmi_voltage_descriptors_get(ph, vinfo);
|
||||
if (ret)
|
||||
return ret;
|
||||
} else {
|
||||
dev_warn(ph->dev, "No Voltage domains found.\n");
|
||||
}
|
||||
|
||||
return ph->set_priv(ph, vinfo);
|
||||
}
|
||||
|
||||
static const struct scmi_protocol scmi_voltage = {
|
||||
.id = SCMI_PROTOCOL_VOLTAGE,
|
||||
.init_instance = &scmi_voltage_protocol_init,
|
||||
.ops = &voltage_proto_ops,
|
||||
};
|
||||
|
||||
DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(voltage, scmi_voltage)
|
@@ -2,7 +2,7 @@
|
||||
/*
|
||||
* System Control and Management Interface(SCMI) based hwmon sensor driver
|
||||
*
|
||||
* Copyright (C) 2018 ARM Ltd.
|
||||
* Copyright (C) 2018-2020 ARM Ltd.
|
||||
* Sudeep Holla <sudeep.holla@arm.com>
|
||||
*/
|
||||
|
||||
@@ -13,8 +13,10 @@
|
||||
#include <linux/sysfs.h>
|
||||
#include <linux/thermal.h>
|
||||
|
||||
static const struct scmi_sensor_proto_ops *sensor_ops;
|
||||
|
||||
struct scmi_sensors {
|
||||
const struct scmi_handle *handle;
|
||||
const struct scmi_protocol_handle *ph;
|
||||
const struct scmi_sensor_info **info[hwmon_max];
|
||||
};
|
||||
|
||||
@@ -30,7 +32,7 @@ static inline u64 __pow10(u8 x)
|
||||
|
||||
static int scmi_hwmon_scale(const struct scmi_sensor_info *sensor, u64 *value)
|
||||
{
|
||||
s8 scale = sensor->scale;
|
||||
int scale = sensor->scale;
|
||||
u64 f;
|
||||
|
||||
switch (sensor->type) {
|
||||
@@ -69,10 +71,9 @@ static int scmi_hwmon_read(struct device *dev, enum hwmon_sensor_types type,
|
||||
u64 value;
|
||||
const struct scmi_sensor_info *sensor;
|
||||
struct scmi_sensors *scmi_sensors = dev_get_drvdata(dev);
|
||||
const struct scmi_handle *h = scmi_sensors->handle;
|
||||
|
||||
sensor = *(scmi_sensors->info[type] + channel);
|
||||
ret = h->sensor_ops->reading_get(h, sensor->id, &value);
|
||||
ret = sensor_ops->reading_get(scmi_sensors->ph, sensor->id, &value);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
@@ -169,11 +170,16 @@ static int scmi_hwmon_probe(struct scmi_device *sdev)
|
||||
struct hwmon_channel_info *scmi_hwmon_chan;
|
||||
const struct hwmon_channel_info **ptr_scmi_ci;
|
||||
const struct scmi_handle *handle = sdev->handle;
|
||||
struct scmi_protocol_handle *ph;
|
||||
|
||||
if (!handle || !handle->sensor_ops)
|
||||
if (!handle)
|
||||
return -ENODEV;
|
||||
|
||||
nr_sensors = handle->sensor_ops->count_get(handle);
|
||||
sensor_ops = handle->devm_get_protocol(sdev, SCMI_PROTOCOL_SENSOR, &ph);
|
||||
if (IS_ERR(sensor_ops))
|
||||
return PTR_ERR(sensor_ops);
|
||||
|
||||
nr_sensors = sensor_ops->count_get(ph);
|
||||
if (!nr_sensors)
|
||||
return -EIO;
|
||||
|
||||
@@ -181,10 +187,10 @@ static int scmi_hwmon_probe(struct scmi_device *sdev)
|
||||
if (!scmi_sensors)
|
||||
return -ENOMEM;
|
||||
|
||||
scmi_sensors->handle = handle;
|
||||
scmi_sensors->ph = ph;
|
||||
|
||||
for (i = 0; i < nr_sensors; i++) {
|
||||
sensor = handle->sensor_ops->info_get(handle, i);
|
||||
sensor = sensor_ops->info_get(ph, i);
|
||||
if (!sensor)
|
||||
return -EINVAL;
|
||||
|
||||
@@ -236,7 +242,7 @@ static int scmi_hwmon_probe(struct scmi_device *sdev)
|
||||
}
|
||||
|
||||
for (i = nr_sensors - 1; i >= 0 ; i--) {
|
||||
sensor = handle->sensor_ops->info_get(handle, i);
|
||||
sensor = sensor_ops->info_get(ph, i);
|
||||
if (!sensor)
|
||||
continue;
|
||||
|
||||
|
@@ -1335,7 +1335,7 @@ int dev_pm_opp_of_register_em(struct device *dev, struct cpumask *cpus)
|
||||
goto failed;
|
||||
}
|
||||
|
||||
ret = em_dev_register_perf_domain(dev, nr_opp, &em_cb, cpus);
|
||||
ret = em_dev_register_perf_domain(dev, nr_opp, &em_cb, cpus, true);
|
||||
if (ret)
|
||||
goto failed;
|
||||
|
||||
|
@@ -194,6 +194,15 @@ config REGULATOR_ARIZONA_MICSUPP
|
||||
and Wolfson Microelectronic Arizona codecs
|
||||
devices.
|
||||
|
||||
config REGULATOR_ARM_SCMI
|
||||
tristate "SCMI based regulator driver"
|
||||
depends on ARM_SCMI_PROTOCOL && OF
|
||||
help
|
||||
This adds the regulator driver support for ARM platforms using SCMI
|
||||
protocol for device voltage management.
|
||||
This driver uses SCMI Message Protocol driver to interact with the
|
||||
firmware providing the device Voltage functionality.
|
||||
|
||||
config REGULATOR_AS3711
|
||||
tristate "AS3711 PMIC"
|
||||
depends on MFD_AS3711
|
||||
|
@@ -26,6 +26,7 @@ obj-$(CONFIG_REGULATOR_AD5398) += ad5398.o
|
||||
obj-$(CONFIG_REGULATOR_ANATOP) += anatop-regulator.o
|
||||
obj-$(CONFIG_REGULATOR_ARIZONA_LDO1) += arizona-ldo1.o
|
||||
obj-$(CONFIG_REGULATOR_ARIZONA_MICSUPP) += arizona-micsupp.o
|
||||
obj-$(CONFIG_REGULATOR_ARM_SCMI) += scmi-regulator.o
|
||||
obj-$(CONFIG_REGULATOR_AS3711) += as3711-regulator.o
|
||||
obj-$(CONFIG_REGULATOR_AS3722) += as3722-regulator.o
|
||||
obj-$(CONFIG_REGULATOR_AXP20X) += axp20x-regulator.o
|
||||
|
@@ -413,8 +413,12 @@ device_node *regulator_of_get_init_node(struct device *dev,
|
||||
|
||||
for_each_available_child_of_node(search, child) {
|
||||
name = of_get_property(child, "regulator-compatible", NULL);
|
||||
if (!name)
|
||||
if (!name) {
|
||||
if (!desc->of_match_full_name)
|
||||
name = child->name;
|
||||
else
|
||||
name = child->full_name;
|
||||
}
|
||||
|
||||
if (!strcmp(desc->of_match, name)) {
|
||||
of_node_put(search);
|
||||
|
drivers/regulator/scmi-regulator.c (new file, 421 lines)
@@ -0,0 +1,421 @@
|
||||
// SPDX-License-Identifier: GPL-2.0
|
||||
//
|
||||
// System Control and Management Interface (SCMI) based regulator driver
|
||||
//
|
||||
// Copyright (C) 2020 ARM Ltd.
|
||||
//
|
||||
// Implements a regulator driver on top of the SCMI Voltage Protocol.
|
||||
//
|
||||
// The ARM SCMI Protocol aims in general to hide as much as possible all the
|
||||
// underlying operational details while providing an abstracted interface for
|
||||
// its users to operate upon: as a consequence the resulting operational
|
||||
// capabilities and configurability of this regulator device are much more
|
||||
// limited than the ones usually available on a standard physical regulator.
|
||||
//
|
||||
// The supported SCMI regulator ops are restricted to the bare minimum:
|
||||
//
|
||||
// - 'status_ops': enable/disable/is_enabled
|
||||
// - 'voltage_ops': get_voltage_sel/set_voltage_sel
|
||||
// list_voltage/map_voltage
|
||||
//
|
||||
// Each SCMI regulator instance is associated, through the means of a proper DT
|
||||
// entry description, to a specific SCMI Voltage Domain.
|
||||
|
||||
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
|
||||
|
||||
#include <linux/linear_range.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/regulator/driver.h>
|
||||
#include <linux/regulator/machine.h>
|
||||
#include <linux/regulator/of_regulator.h>
|
||||
#include <linux/scmi_protocol.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/types.h>
|
||||
|
||||
static const struct scmi_voltage_proto_ops *voltage_ops;
|
||||
|
||||
struct scmi_regulator {
|
||||
u32 id;
|
||||
struct scmi_device *sdev;
|
||||
struct scmi_protocol_handle *ph;
|
||||
struct regulator_dev *rdev;
|
||||
struct device_node *of_node;
|
||||
struct regulator_desc desc;
|
||||
struct regulator_config conf;
|
||||
};
|
||||
|
||||
struct scmi_regulator_info {
|
||||
int num_doms;
|
||||
struct scmi_regulator **sregv;
|
||||
};
|
||||
|
||||
static int scmi_reg_enable(struct regulator_dev *rdev)
|
||||
{
|
||||
struct scmi_regulator *sreg = rdev_get_drvdata(rdev);
|
||||
|
||||
return voltage_ops->config_set(sreg->ph, sreg->id,
|
||||
SCMI_VOLTAGE_ARCH_STATE_ON);
|
||||
}
|
||||
|
||||
static int scmi_reg_disable(struct regulator_dev *rdev)
|
||||
{
|
||||
struct scmi_regulator *sreg = rdev_get_drvdata(rdev);
|
||||
|
||||
return voltage_ops->config_set(sreg->ph, sreg->id,
|
||||
SCMI_VOLTAGE_ARCH_STATE_OFF);
|
||||
}
|
||||
|
||||
static int scmi_reg_is_enabled(struct regulator_dev *rdev)
|
||||
{
|
||||
int ret;
|
||||
u32 config;
|
||||
struct scmi_regulator *sreg = rdev_get_drvdata(rdev);
|
||||
|
||||
ret = voltage_ops->config_get(sreg->ph, sreg->id, &config);
|
||||
if (ret) {
|
||||
dev_err(&sreg->sdev->dev,
|
||||
"Error %d reading regulator %s status.\n",
|
||||
ret, sreg->desc.name);
|
||||
return ret;
|
||||
}
|
||||
|
||||
return config & SCMI_VOLTAGE_ARCH_STATE_ON;
|
||||
}
|
||||
|
||||
static int scmi_reg_get_voltage_sel(struct regulator_dev *rdev)
|
||||
{
|
||||
int ret;
|
||||
s32 volt_uV;
|
||||
struct scmi_regulator *sreg = rdev_get_drvdata(rdev);
|
||||
|
||||
ret = voltage_ops->level_get(sreg->ph, sreg->id, &volt_uV);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
return sreg->desc.ops->map_voltage(rdev, volt_uV, volt_uV);
|
||||
}
|
||||
|
||||
static int scmi_reg_set_voltage_sel(struct regulator_dev *rdev,
|
||||
unsigned int selector)
|
||||
{
|
||||
s32 volt_uV;
|
||||
struct scmi_regulator *sreg = rdev_get_drvdata(rdev);
|
||||
|
||||
volt_uV = sreg->desc.ops->list_voltage(rdev, selector);
|
||||
if (volt_uV <= 0)
|
||||
return -EINVAL;
|
||||
|
||||
return voltage_ops->level_set(sreg->ph, sreg->id, 0x0, volt_uV);
|
||||
}
|
||||
|
||||
static const struct regulator_ops scmi_reg_fixed_ops = {
|
||||
.enable = scmi_reg_enable,
|
||||
.disable = scmi_reg_disable,
|
||||
.is_enabled = scmi_reg_is_enabled,
|
||||
};
|
||||
|
||||
static const struct regulator_ops scmi_reg_linear_ops = {
|
||||
.enable = scmi_reg_enable,
|
||||
.disable = scmi_reg_disable,
|
||||
.is_enabled = scmi_reg_is_enabled,
|
||||
.get_voltage_sel = scmi_reg_get_voltage_sel,
|
||||
.set_voltage_sel = scmi_reg_set_voltage_sel,
|
||||
.list_voltage = regulator_list_voltage_linear,
|
||||
.map_voltage = regulator_map_voltage_linear,
|
||||
};
|
||||
|
||||
static const struct regulator_ops scmi_reg_discrete_ops = {
|
||||
.enable = scmi_reg_enable,
|
||||
.disable = scmi_reg_disable,
|
||||
.is_enabled = scmi_reg_is_enabled,
|
||||
.get_voltage_sel = scmi_reg_get_voltage_sel,
|
||||
.set_voltage_sel = scmi_reg_set_voltage_sel,
|
||||
.list_voltage = regulator_list_voltage_table,
|
||||
.map_voltage = regulator_map_voltage_iterate,
|
||||
};
|
||||
|
||||
static int
|
||||
scmi_config_linear_regulator_mappings(struct scmi_regulator *sreg,
|
||||
const struct scmi_voltage_info *vinfo)
|
||||
{
|
||||
s32 delta_uV;
|
||||
|
||||
/*
|
||||
* Note that SCMI voltage domains describable by linear ranges
|
||||
* (segments) {low, high, step} are guaranteed to come in one single
|
||||
* triplet by the SCMI Voltage Domain protocol support itself.
|
||||
*/
|
||||
|
||||
delta_uV = (vinfo->levels_uv[SCMI_VOLTAGE_SEGMENT_HIGH] -
|
||||
vinfo->levels_uv[SCMI_VOLTAGE_SEGMENT_LOW]);
|
||||
|
||||
/* Rule out buggy negative-intervals answers from fw */
|
||||
if (delta_uV < 0) {
|
||||
dev_err(&sreg->sdev->dev,
|
||||
"Invalid volt-range %d-%duV for domain %d\n",
|
||||
vinfo->levels_uv[SCMI_VOLTAGE_SEGMENT_LOW],
|
||||
vinfo->levels_uv[SCMI_VOLTAGE_SEGMENT_HIGH],
|
||||
sreg->id);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
if (!delta_uV) {
|
||||
/* Just one fixed voltage exposed by SCMI */
|
||||
sreg->desc.fixed_uV =
|
||||
vinfo->levels_uv[SCMI_VOLTAGE_SEGMENT_LOW];
|
||||
sreg->desc.n_voltages = 1;
|
||||
sreg->desc.ops = &scmi_reg_fixed_ops;
|
||||
} else {
|
||||
/* One simple linear mapping. */
|
||||
sreg->desc.min_uV =
|
||||
vinfo->levels_uv[SCMI_VOLTAGE_SEGMENT_LOW];
|
||||
sreg->desc.uV_step =
|
||||
vinfo->levels_uv[SCMI_VOLTAGE_SEGMENT_STEP];
|
||||
sreg->desc.linear_min_sel = 0;
|
||||
sreg->desc.n_voltages = delta_uV / sreg->desc.uV_step;
|
||||
sreg->desc.ops = &scmi_reg_linear_ops;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int
|
||||
scmi_config_discrete_regulator_mappings(struct scmi_regulator *sreg,
|
||||
const struct scmi_voltage_info *vinfo)
|
||||
{
|
||||
/* Discrete non linear levels are mapped to volt_table */
|
||||
sreg->desc.n_voltages = vinfo->num_levels;
|
||||
|
||||
if (sreg->desc.n_voltages > 1) {
|
||||
sreg->desc.volt_table = (const unsigned int *)vinfo->levels_uv;
|
||||
sreg->desc.ops = &scmi_reg_discrete_ops;
|
||||
} else {
|
||||
sreg->desc.fixed_uV = vinfo->levels_uv[0];
|
||||
sreg->desc.ops = &scmi_reg_fixed_ops;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int scmi_regulator_common_init(struct scmi_regulator *sreg)
|
||||
{
|
||||
int ret;
|
||||
struct device *dev = &sreg->sdev->dev;
|
||||
const struct scmi_voltage_info *vinfo;
|
||||
|
||||
vinfo = voltage_ops->info_get(sreg->ph, sreg->id);
|
||||
if (!vinfo) {
|
||||
dev_warn(dev, "Failure to get voltage domain %d\n",
|
||||
sreg->id);
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
/*
|
||||
* Regulator framework does not fully support negative voltages
|
||||
* so we discard any voltage domain reported as supporting negative
|
||||
* voltages: as a consequence each levels_uv entry is guaranteed to
|
||||
* be non-negative from here on.
|
||||
*/
|
||||
if (vinfo->negative_volts_allowed) {
|
||||
dev_warn(dev, "Negative voltages NOT supported...skip %s\n",
|
||||
sreg->of_node->full_name);
|
||||
return -EOPNOTSUPP;
|
||||
}
|
||||
|
||||
sreg->desc.name = devm_kasprintf(dev, GFP_KERNEL, "%s", vinfo->name);
|
||||
if (!sreg->desc.name)
|
||||
return -ENOMEM;
|
||||
|
||||
sreg->desc.id = sreg->id;
|
||||
sreg->desc.type = REGULATOR_VOLTAGE;
|
||||
sreg->desc.owner = THIS_MODULE;
|
||||
sreg->desc.of_match_full_name = true;
|
||||
sreg->desc.of_match = sreg->of_node->full_name;
|
||||
sreg->desc.regulators_node = "regulators";
|
||||
if (vinfo->segmented)
|
||||
ret = scmi_config_linear_regulator_mappings(sreg, vinfo);
|
||||
else
|
||||
ret = scmi_config_discrete_regulator_mappings(sreg, vinfo);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/*
|
||||
* Using the scmi device here to have DT searched from Voltage
|
||||
* protocol node down.
|
||||
*/
|
||||
sreg->conf.dev = dev;
|
||||
|
||||
/* Store for later retrieval via rdev_get_drvdata() */
|
||||
sreg->conf.driver_data = sreg;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int process_scmi_regulator_of_node(struct scmi_device *sdev,
|
||||
struct scmi_protocol_handle *ph,
|
||||
struct device_node *np,
|
||||
struct scmi_regulator_info *rinfo)
|
||||
{
|
||||
u32 dom, ret;
|
||||
|
||||
ret = of_property_read_u32(np, "reg", &dom);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
if (dom >= rinfo->num_doms)
|
||||
return -ENODEV;
|
||||
|
||||
if (rinfo->sregv[dom]) {
|
||||
dev_err(&sdev->dev,
|
||||
"SCMI Voltage Domain %d already in use. Skipping: %s\n",
|
||||
dom, np->full_name);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
rinfo->sregv[dom] = devm_kzalloc(&sdev->dev,
|
||||
sizeof(struct scmi_regulator),
|
||||
GFP_KERNEL);
|
||||
if (!rinfo->sregv[dom])
|
||||
return -ENOMEM;
|
||||
|
||||
rinfo->sregv[dom]->id = dom;
|
||||
rinfo->sregv[dom]->sdev = sdev;
|
||||
rinfo->sregv[dom]->ph = ph;
|
||||
|
||||
/* get hold of good nodes */
|
||||
of_node_get(np);
|
||||
rinfo->sregv[dom]->of_node = np;
|
||||
|
||||
dev_dbg(&sdev->dev,
|
||||
"Found SCMI Regulator entry -- OF node [%d] -> %s\n",
|
||||
dom, np->full_name);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int scmi_regulator_probe(struct scmi_device *sdev)
|
||||
{
|
||||
int d, ret, num_doms;
|
||||
struct device_node *np, *child;
|
||||
const struct scmi_handle *handle = sdev->handle;
|
||||
struct scmi_regulator_info *rinfo;
|
||||
struct scmi_protocol_handle *ph;
|
||||
|
||||
if (!handle)
|
||||
return -ENODEV;
|
||||
|
||||
voltage_ops = handle->devm_get_protocol(sdev,
|
||||
SCMI_PROTOCOL_VOLTAGE, &ph);
|
||||
if (IS_ERR(voltage_ops))
|
||||
return PTR_ERR(voltage_ops);
|
||||
|
||||
num_doms = voltage_ops->num_domains_get(ph);
|
||||
if (num_doms <= 0) {
|
||||
if (!num_doms) {
|
||||
dev_err(&sdev->dev,
|
||||
"number of voltage domains invalid\n");
|
||||
num_doms = -EINVAL;
|
||||
} else {
|
||||
dev_err(&sdev->dev,
|
||||
"failed to get voltage domains - err:%d\n",
|
||||
num_doms);
|
||||
}
|
||||
|
||||
return num_doms;
|
||||
}
|
||||
|
||||
rinfo = devm_kzalloc(&sdev->dev, sizeof(*rinfo), GFP_KERNEL);
|
||||
if (!rinfo)
|
||||
return -ENOMEM;
|
||||
|
||||
/* Allocate pointers array for all possible domains */
|
||||
rinfo->sregv = devm_kcalloc(&sdev->dev, num_doms,
|
||||
sizeof(void *), GFP_KERNEL);
|
||||
if (!rinfo->sregv)
|
||||
return -ENOMEM;
|
||||
|
||||
rinfo->num_doms = num_doms;
|
||||
|
||||
/*
|
||||
* Start collecting into rinfo->sregv possibly good SCMI Regulators as
|
||||
* described by a well-formed DT entry and associated with an existing
|
||||
* plausible SCMI Voltage Domain number, all belonging to this SCMI
|
||||
* platform instance node (handle->dev->of_node).
|
||||
*/
|
||||
np = of_find_node_by_name(handle->dev->of_node, "regulators");
|
||||
for_each_child_of_node(np, child) {
|
||||
ret = process_scmi_regulator_of_node(sdev, ph, child, rinfo);
|
||||
/* abort on any mem issue */
|
||||
if (ret == -ENOMEM)
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* Register a regulator for each valid regulator-DT-entry that we
|
||||
* can successfully reach via SCMI and has a valid associated voltage
|
||||
* domain.
|
||||
*/
|
||||
for (d = 0; d < num_doms; d++) {
|
||||
struct scmi_regulator *sreg = rinfo->sregv[d];
|
||||
|
||||
/* Skip empty slots */
|
||||
if (!sreg)
|
||||
continue;
|
||||
|
||||
ret = scmi_regulator_common_init(sreg);
|
||||
/* Skip invalid voltage domains */
|
||||
if (ret)
|
||||
continue;
|
||||
|
||||
sreg->rdev = devm_regulator_register(&sdev->dev, &sreg->desc,
|
||||
&sreg->conf);
|
||||
if (IS_ERR(sreg->rdev)) {
|
||||
sreg->rdev = NULL;
|
||||
continue;
|
||||
}
|
||||
|
||||
dev_info(&sdev->dev,
|
||||
"Regulator %s registered for domain [%d]\n",
|
||||
sreg->desc.name, sreg->id);
|
||||
}
|
||||
|
||||
dev_set_drvdata(&sdev->dev, rinfo);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void scmi_regulator_remove(struct scmi_device *sdev)
|
||||
{
|
||||
int d;
|
||||
struct scmi_regulator_info *rinfo;
|
||||
|
||||
rinfo = dev_get_drvdata(&sdev->dev);
|
||||
if (!rinfo)
|
||||
return;
|
||||
|
||||
for (d = 0; d < rinfo->num_doms; d++) {
|
||||
if (!rinfo->sregv[d])
|
||||
continue;
|
||||
of_node_put(rinfo->sregv[d]->of_node);
|
||||
}
|
||||
}
|
||||
|
||||
static const struct scmi_device_id scmi_regulator_id_table[] = {
|
||||
{ SCMI_PROTOCOL_VOLTAGE, "regulator" },
|
||||
{ },
|
||||
};
|
||||
MODULE_DEVICE_TABLE(scmi, scmi_regulator_id_table);
|
||||
|
||||
static struct scmi_driver scmi_drv = {
|
||||
.name = "scmi-regulator",
|
||||
.probe = scmi_regulator_probe,
|
||||
.remove = scmi_regulator_remove,
|
||||
.id_table = scmi_regulator_id_table,
|
||||
};
|
||||
|
||||
module_scmi_driver(scmi_drv);
|
||||
|
||||
MODULE_AUTHOR("Cristian Marussi <cristian.marussi@arm.com>");
|
||||
MODULE_DESCRIPTION("ARM SCMI regulator driver");
|
||||
MODULE_LICENSE("GPL v2");
|
@@ -2,7 +2,7 @@
|
||||
/*
|
||||
* ARM System Control and Management Interface (ARM SCMI) reset driver
|
||||
*
|
||||
* Copyright (C) 2019 ARM Ltd.
|
||||
* Copyright (C) 2019-2020 ARM Ltd.
|
||||
*/
|
||||
|
||||
#include <linux/module.h>
|
||||
@@ -11,18 +11,20 @@
|
||||
#include <linux/reset-controller.h>
|
||||
#include <linux/scmi_protocol.h>
|
||||
|
||||
static const struct scmi_reset_proto_ops *reset_ops;
|
||||
|
||||
/**
|
||||
* struct scmi_reset_data - reset controller information structure
|
||||
* @rcdev: reset controller entity
|
||||
* @handle: ARM SCMI handle used for communication with system controller
|
||||
* @ph: ARM SCMI protocol handle used for communication with system controller
|
||||
*/
|
||||
struct scmi_reset_data {
|
||||
struct reset_controller_dev rcdev;
|
||||
const struct scmi_handle *handle;
|
||||
const struct scmi_protocol_handle *ph;
|
||||
};
|
||||
|
||||
#define to_scmi_reset_data(p) container_of((p), struct scmi_reset_data, rcdev)
|
||||
#define to_scmi_handle(p) (to_scmi_reset_data(p)->handle)
|
||||
#define to_scmi_handle(p) (to_scmi_reset_data(p)->ph)
|
||||
|
||||
/**
|
||||
* scmi_reset_assert() - assert device reset
|
||||
@@ -37,9 +39,9 @@ struct scmi_reset_data {
|
||||
static int
|
||||
scmi_reset_assert(struct reset_controller_dev *rcdev, unsigned long id)
|
||||
{
|
||||
const struct scmi_handle *handle = to_scmi_handle(rcdev);
|
||||
const struct scmi_protocol_handle *ph = to_scmi_handle(rcdev);
|
||||
|
||||
return handle->reset_ops->assert(handle, id);
|
||||
return reset_ops->assert(ph, id);
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -55,9 +57,9 @@ scmi_reset_assert(struct reset_controller_dev *rcdev, unsigned long id)
|
||||
static int
|
||||
scmi_reset_deassert(struct reset_controller_dev *rcdev, unsigned long id)
|
||||
{
|
||||
const struct scmi_handle *handle = to_scmi_handle(rcdev);
|
||||
const struct scmi_protocol_handle *ph = to_scmi_handle(rcdev);
|
||||
|
||||
return handle->reset_ops->deassert(handle, id);
|
||||
return reset_ops->deassert(ph, id);
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -73,9 +75,9 @@ scmi_reset_deassert(struct reset_controller_dev *rcdev, unsigned long id)
|
||||
static int
|
||||
scmi_reset_reset(struct reset_controller_dev *rcdev, unsigned long id)
|
||||
{
|
||||
const struct scmi_handle *handle = to_scmi_handle(rcdev);
|
||||
const struct scmi_protocol_handle *ph = to_scmi_handle(rcdev);
|
||||
|
||||
return handle->reset_ops->reset(handle, id);
|
||||
return reset_ops->reset(ph, id);
|
||||
}
|
||||
|
||||
static const struct reset_control_ops scmi_reset_ops = {
|
||||
@@ -90,10 +92,15 @@ static int scmi_reset_probe(struct scmi_device *sdev)
|
||||
struct device *dev = &sdev->dev;
|
||||
struct device_node *np = dev->of_node;
|
||||
const struct scmi_handle *handle = sdev->handle;
|
||||
struct scmi_protocol_handle *ph;
|
||||
|
||||
if (!handle || !handle->reset_ops)
|
||||
if (!handle)
|
||||
return -ENODEV;
|
||||
|
||||
reset_ops = handle->devm_get_protocol(sdev, SCMI_PROTOCOL_RESET, &ph);
|
||||
if (IS_ERR(reset_ops))
|
||||
return PTR_ERR(reset_ops);
|
||||
|
||||
data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
|
||||
if (!data)
|
||||
return -ENOMEM;
|
||||
@ -101,8 +108,8 @@ static int scmi_reset_probe(struct scmi_device *sdev)
|
||||
data->rcdev.ops = &scmi_reset_ops;
|
||||
data->rcdev.owner = THIS_MODULE;
|
||||
data->rcdev.of_node = np;
|
||||
data->rcdev.nr_resets = handle->reset_ops->num_domains_get(handle);
|
||||
data->handle = handle;
|
||||
data->rcdev.nr_resets = reset_ops->num_domains_get(ph);
|
||||
data->ph = ph;
|
||||
|
||||
return devm_reset_controller_register(dev, &data->rcdev);
|
||||
}
|
||||
|
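The reset-scmi conversion above shows the pattern every SCMI client driver follows in this series: instead of dereferencing handle->reset_ops, the driver asks the core for that protocol's operations and receives a protocol handle which is then passed to every per-protocol call. A minimal probe-time sketch, using only names visible in the hunks above (the devm_get_protocol accessor name is taken from this FROMLIST revision and may differ in later versions of the series):

#include <linux/err.h>
#include <linux/scmi_protocol.h>

static const struct scmi_reset_proto_ops *reset_ops;

static int example_scmi_probe(struct scmi_device *sdev)
{
	struct scmi_protocol_handle *ph;
	const struct scmi_handle *handle = sdev->handle;

	if (!handle)
		return -ENODEV;

	/* devres-managed: the protocol is released when the scmi_device goes away */
	reset_ops = handle->devm_get_protocol(sdev, SCMI_PROTOCOL_RESET, &ph);
	if (IS_ERR(reset_ops))
		return PTR_ERR(reset_ops);

	/* per-protocol calls now take the protocol handle, not the global SCMI handle */
	if (reset_ops->num_domains_get(ph) <= 0)
		return -ENODEV;

	return 0;
}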
@@ -1238,6 +1238,8 @@ static void dwc3_get_properties(struct dwc3 *dwc)
	u8 rx_max_burst_prd;
	u8 tx_thr_num_pkt_prd;
	u8 tx_max_burst_prd;
	const char *usb_psy_name;
	int ret;

	/* default to highest possible threshold */
	lpm_nyet_threshold = 0xf;
@@ -1263,6 +1265,13 @@ static void dwc3_get_properties(struct dwc3 *dwc)
	else
		dwc->sysdev = dwc->dev;

	ret = device_property_read_string(dev, "usb-psy-name", &usb_psy_name);
	if (ret >= 0) {
		dwc->usb_psy = power_supply_get_by_name(usb_psy_name);
		if (!dwc->usb_psy)
			dev_err(dev, "couldn't get usb power supply\n");
	}

	dwc->has_lpm_erratum = device_property_read_bool(dev,
				"snps,has-lpm-erratum");
	device_property_read_u8(dev, "snps,lpm-nyet-threshold",
@@ -1619,6 +1628,9 @@ static int dwc3_probe(struct platform_device *pdev)
assert_reset:
	reset_control_assert(dwc->reset);

	if (dwc->usb_psy)
		power_supply_put(dwc->usb_psy);

	return ret;
}

@@ -1641,6 +1653,9 @@ static int dwc3_remove(struct platform_device *pdev)
	dwc3_free_event_buffers(dwc);
	dwc3_free_scratch_buffers(dwc);

	if (dwc->usb_psy)
		power_supply_put(dwc->usb_psy);

	return 0;
}
@@ -30,6 +30,8 @@

#include <linux/phy/phy.h>

#include <linux/power_supply.h>

#define DWC3_MSG_MAX 500

/* Global constants */
@@ -986,6 +988,7 @@ struct dwc3_scratchpad_array {
 * @role_sw: usb_role_switch handle
 * @role_switch_default_mode: default operation mode of controller while
 *                            usb role is USB_ROLE_NONE.
 * @usb_psy: pointer to power supply interface.
 * @usb2_phy: pointer to USB2 PHY
 * @usb3_phy: pointer to USB3 PHY
 * @usb2_generic_phy: pointer to USB2 PHY
@@ -1125,6 +1128,8 @@ struct dwc3 {
	struct usb_role_switch *role_sw;
	enum usb_dr_mode role_switch_default_mode;

	struct power_supply *usb_psy;

	u32 fladj;
	u32 irq_gadget;
	u32 otg_irq;
@@ -2532,11 +2532,19 @@ static void dwc3_gadget_set_ssp_rate(struct usb_gadget *g,
static int dwc3_gadget_vbus_draw(struct usb_gadget *g, unsigned int mA)
{
	struct dwc3 *dwc = gadget_to_dwc(g);
	union power_supply_propval val = {0};
	int ret;

	if (dwc->usb2_phy)
		return usb_phy_set_power(dwc->usb2_phy, mA);

	return 0;
	if (!dwc->usb_psy)
		return -EOPNOTSUPP;

	val.intval = mA;
	ret = power_supply_set_property(dwc->usb_psy, POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT, &val);

	return ret;
}

static const struct usb_gadget_ops dwc3_gadget_ops = {
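The dwc3_gadget_vbus_draw() change above forwards the gadget's requested vbus current into the power-supply framework through POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT on the supply named by the new "usb-psy-name" property. For context, a hedged sketch of the receiving side, i.e. a charger power_supply driver accepting that writable property; the driver and struct names are hypothetical, only the framework callbacks are real:

#include <linux/power_supply.h>

struct example_charger {
	int input_limit_ma;		/* last limit requested over USB */
};

static enum power_supply_property example_props[] = {
	POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT,
};

static int example_set_property(struct power_supply *psy,
				enum power_supply_property psp,
				const union power_supply_propval *val)
{
	struct example_charger *chg = power_supply_get_drvdata(psy);

	if (psp != POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT)
		return -EINVAL;

	/* value forwarded by dwc3_gadget_vbus_draw() above */
	chg->input_limit_ma = val->intval;
	return 0;
}

static int example_property_is_writeable(struct power_supply *psy,
					 enum power_supply_property psp)
{
	return psp == POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT;
}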
@@ -448,8 +448,13 @@ static int xhci_stop_device(struct xhci_hcd *xhci, int slot_id, int suspend)
			cmd->status == COMP_COMMAND_RING_STOPPED) {
		xhci_warn(xhci, "Timeout while waiting for stop endpoint command\n");
		ret = -ETIME;
		goto cmd_cleanup;
	}

	ret = xhci_vendor_sync_dev_ctx(xhci, slot_id);
	if (ret)
		xhci_warn(xhci, "Sync device context failed, ret=%d\n", ret);

cmd_cleanup:
	xhci_free_command(xhci, cmd);
	return ret;
@ -292,6 +292,7 @@ void xhci_ring_free(struct xhci_hcd *xhci, struct xhci_ring *ring)
|
||||
|
||||
kfree(ring);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(xhci_ring_free);
|
||||
|
||||
void xhci_initialize_ring_info(struct xhci_ring *ring,
|
||||
unsigned int cycle_state)
|
||||
@ -361,6 +362,37 @@ static int xhci_alloc_segments_for_ring(struct xhci_hcd *xhci,
|
||||
return 0;
|
||||
}
|
||||
|
||||
static struct xhci_ring *xhci_vendor_alloc_transfer_ring(struct xhci_hcd *xhci,
|
||||
u32 endpoint_type, enum xhci_ring_type ring_type,
|
||||
gfp_t mem_flags)
|
||||
{
|
||||
struct xhci_vendor_ops *ops = xhci_vendor_get_ops(xhci);
|
||||
|
||||
if (ops && ops->alloc_transfer_ring)
|
||||
return ops->alloc_transfer_ring(xhci, endpoint_type, ring_type,
|
||||
mem_flags);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void xhci_vendor_free_transfer_ring(struct xhci_hcd *xhci,
|
||||
struct xhci_virt_device *virt_dev, unsigned int ep_index)
|
||||
{
|
||||
struct xhci_vendor_ops *ops = xhci_vendor_get_ops(xhci);
|
||||
|
||||
if (ops && ops->free_transfer_ring)
|
||||
ops->free_transfer_ring(xhci, virt_dev, ep_index);
|
||||
}
|
||||
|
||||
static bool xhci_vendor_is_usb_offload_enabled(struct xhci_hcd *xhci,
|
||||
struct xhci_virt_device *virt_dev, unsigned int ep_index)
|
||||
{
|
||||
struct xhci_vendor_ops *ops = xhci_vendor_get_ops(xhci);
|
||||
|
||||
if (ops && ops->is_usb_offload_enabled)
|
||||
return ops->is_usb_offload_enabled(xhci, virt_dev, ep_index);
|
||||
return false;
|
||||
}
|
||||
|
||||
/*
|
||||
* Create a new ring with zero or more segments.
|
||||
*
|
||||
@ -412,7 +444,11 @@ void xhci_free_endpoint_ring(struct xhci_hcd *xhci,
|
||||
struct xhci_virt_device *virt_dev,
|
||||
unsigned int ep_index)
|
||||
{
|
||||
if (xhci_vendor_is_usb_offload_enabled(xhci, virt_dev, ep_index))
|
||||
xhci_vendor_free_transfer_ring(xhci, virt_dev, ep_index);
|
||||
else
|
||||
xhci_ring_free(xhci, virt_dev->eps[ep_index].ring);
|
||||
|
||||
virt_dev->eps[ep_index].ring = NULL;
|
||||
}
|
||||
|
||||
@ -519,6 +555,7 @@ struct xhci_slot_ctx *xhci_get_slot_ctx(struct xhci_hcd *xhci,
|
||||
return (struct xhci_slot_ctx *)
|
||||
(ctx->bytes + CTX_SIZE(xhci->hcc_params));
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(xhci_get_slot_ctx);
|
||||
|
||||
struct xhci_ep_ctx *xhci_get_ep_ctx(struct xhci_hcd *xhci,
|
||||
struct xhci_container_ctx *ctx,
|
||||
@ -532,6 +569,7 @@ struct xhci_ep_ctx *xhci_get_ep_ctx(struct xhci_hcd *xhci,
|
||||
return (struct xhci_ep_ctx *)
|
||||
(ctx->bytes + (ep_index * CTX_SIZE(xhci->hcc_params)));
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(xhci_get_ep_ctx);
|
||||
|
||||
|
||||
/***************** Streams structures manipulation *************************/
|
||||
@ -889,7 +927,7 @@ void xhci_free_virt_device(struct xhci_hcd *xhci, int slot_id)
|
||||
|
||||
for (i = 0; i < 31; i++) {
|
||||
if (dev->eps[i].ring)
|
||||
xhci_ring_free(xhci, dev->eps[i].ring);
|
||||
xhci_free_endpoint_ring(xhci, dev, i);
|
||||
if (dev->eps[i].stream_info)
|
||||
xhci_free_stream_info(xhci,
|
||||
dev->eps[i].stream_info);
|
||||
@ -1488,8 +1526,15 @@ int xhci_endpoint_init(struct xhci_hcd *xhci,
|
||||
mult = 0;
|
||||
|
||||
/* Set up the endpoint ring */
|
||||
if (xhci_vendor_is_usb_offload_enabled(xhci, virt_dev, ep_index) &&
|
||||
usb_endpoint_xfer_isoc(&ep->desc)) {
|
||||
virt_dev->eps[ep_index].new_ring =
|
||||
xhci_vendor_alloc_transfer_ring(xhci, endpoint_type, ring_type, mem_flags);
|
||||
} else {
|
||||
virt_dev->eps[ep_index].new_ring =
|
||||
xhci_ring_alloc(xhci, 2, 1, ring_type, max_packet, mem_flags);
|
||||
}
|
||||
|
||||
if (!virt_dev->eps[ep_index].new_ring)
|
||||
return -ENOMEM;
|
||||
|
||||
@ -1833,6 +1878,24 @@ void xhci_free_erst(struct xhci_hcd *xhci, struct xhci_erst *erst)
|
||||
erst->entries = NULL;
|
||||
}
|
||||
|
||||
static struct xhci_device_context_array *xhci_vendor_alloc_dcbaa(
|
||||
struct xhci_hcd *xhci, gfp_t flags)
|
||||
{
|
||||
struct xhci_vendor_ops *ops = xhci_vendor_get_ops(xhci);
|
||||
|
||||
if (ops && ops->alloc_dcbaa)
|
||||
return ops->alloc_dcbaa(xhci, flags);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void xhci_vendor_free_dcbaa(struct xhci_hcd *xhci)
|
||||
{
|
||||
struct xhci_vendor_ops *ops = xhci_vendor_get_ops(xhci);
|
||||
|
||||
if (ops && ops->free_dcbaa)
|
||||
ops->free_dcbaa(xhci);
|
||||
}
|
||||
|
||||
void xhci_mem_cleanup(struct xhci_hcd *xhci)
|
||||
{
|
||||
struct device *dev = xhci_to_hcd(xhci)->self.sysdev;
|
||||
@ -1887,9 +1950,13 @@ void xhci_mem_cleanup(struct xhci_hcd *xhci)
|
||||
xhci_dbg_trace(xhci, trace_xhci_dbg_init,
|
||||
"Freed medium stream array pool");
|
||||
|
||||
if (xhci_vendor_is_usb_offload_enabled(xhci, NULL, 0)) {
|
||||
xhci_vendor_free_dcbaa(xhci);
|
||||
} else {
|
||||
if (xhci->dcbaa)
|
||||
dma_free_coherent(dev, sizeof(*xhci->dcbaa),
|
||||
xhci->dcbaa, xhci->dcbaa->dma);
|
||||
}
|
||||
xhci->dcbaa = NULL;
|
||||
|
||||
scratchpad_free(xhci);
|
||||
@ -2416,15 +2483,21 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
|
||||
* xHCI section 5.4.6 - doorbell array must be
|
||||
* "physically contiguous and 64-byte (cache line) aligned".
|
||||
*/
|
||||
if (xhci_vendor_is_usb_offload_enabled(xhci, NULL, 0)) {
|
||||
xhci->dcbaa = xhci_vendor_alloc_dcbaa(xhci, flags);
|
||||
if (!xhci->dcbaa)
|
||||
goto fail;
|
||||
} else {
|
||||
xhci->dcbaa = dma_alloc_coherent(dev, sizeof(*xhci->dcbaa), &dma,
|
||||
flags);
|
||||
if (!xhci->dcbaa)
|
||||
goto fail;
|
||||
xhci->dcbaa->dma = dma;
|
||||
}
|
||||
xhci_dbg_trace(xhci, trace_xhci_dbg_init,
|
||||
"// Device context base array address = 0x%llx (DMA), %p (virt)",
|
||||
(unsigned long long)xhci->dcbaa->dma, xhci->dcbaa);
|
||||
xhci_write_64(xhci, dma, &xhci->op_regs->dcbaa_ptr);
|
||||
xhci_write_64(xhci, xhci->dcbaa->dma, &xhci->op_regs->dcbaa_ptr);
|
||||
|
||||
/*
|
||||
* Initialize the ring segment pool. The ring must be a contiguous
|
||||
|
@ -184,6 +184,43 @@ static const struct of_device_id usb_xhci_of_match[] = {
|
||||
MODULE_DEVICE_TABLE(of, usb_xhci_of_match);
|
||||
#endif
|
||||
|
||||
static struct xhci_plat_priv_overwrite xhci_plat_vendor_overwrite;
|
||||
|
||||
int xhci_plat_register_vendor_ops(struct xhci_vendor_ops *vendor_ops)
|
||||
{
|
||||
if (vendor_ops == NULL)
|
||||
return -EINVAL;
|
||||
|
||||
xhci_plat_vendor_overwrite.vendor_ops = vendor_ops;
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(xhci_plat_register_vendor_ops);
|
||||
|
||||
static int xhci_vendor_init(struct xhci_hcd *xhci)
|
||||
{
|
||||
struct xhci_vendor_ops *ops = xhci_vendor_get_ops(xhci);
|
||||
struct xhci_plat_priv *priv = xhci_to_priv(xhci);
|
||||
|
||||
if (xhci_plat_vendor_overwrite.vendor_ops)
|
||||
ops = priv->vendor_ops = xhci_plat_vendor_overwrite.vendor_ops;
|
||||
|
||||
if (ops && ops->vendor_init)
|
||||
return ops->vendor_init(xhci);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void xhci_vendor_cleanup(struct xhci_hcd *xhci)
|
||||
{
|
||||
struct xhci_vendor_ops *ops = xhci_vendor_get_ops(xhci);
|
||||
struct xhci_plat_priv *priv = xhci_to_priv(xhci);
|
||||
|
||||
if (ops && ops->vendor_cleanup)
|
||||
ops->vendor_cleanup(xhci);
|
||||
|
||||
priv->vendor_ops = NULL;
|
||||
}
|
||||
|
||||
static int xhci_plat_probe(struct platform_device *pdev)
|
||||
{
|
||||
const struct xhci_plat_priv *priv_match;
|
||||
@ -339,6 +376,10 @@ static int xhci_plat_probe(struct platform_device *pdev)
|
||||
goto put_usb3_hcd;
|
||||
}
|
||||
|
||||
ret = xhci_vendor_init(xhci);
|
||||
if (ret)
|
||||
goto disable_usb_phy;
|
||||
|
||||
hcd->tpl_support = of_usb_host_tpl_support(sysdev->of_node);
|
||||
xhci->shared_hcd->tpl_support = hcd->tpl_support;
|
||||
|
||||
@ -418,8 +459,10 @@ static int xhci_plat_remove(struct platform_device *dev)
|
||||
usb_phy_shutdown(hcd->usb_phy);
|
||||
|
||||
usb_remove_hcd(hcd);
|
||||
usb_put_hcd(shared_hcd);
|
||||
|
||||
xhci_vendor_cleanup(xhci);
|
||||
|
||||
usb_put_hcd(shared_hcd);
|
||||
clk_disable_unprepare(clk);
|
||||
clk_disable_unprepare(reg_clk);
|
||||
usb_put_hcd(hcd);
|
||||
|
@ -13,6 +13,8 @@
|
||||
struct xhci_plat_priv {
|
||||
const char *firmware_name;
|
||||
unsigned long long quirks;
|
||||
struct xhci_vendor_ops *vendor_ops;
|
||||
struct xhci_vendor_data *vendor_data;
|
||||
int (*plat_setup)(struct usb_hcd *);
|
||||
void (*plat_start)(struct usb_hcd *);
|
||||
int (*init_quirk)(struct usb_hcd *);
|
||||
@ -22,4 +24,11 @@ struct xhci_plat_priv {
|
||||
|
||||
#define hcd_to_xhci_priv(h) ((struct xhci_plat_priv *)hcd_to_xhci(h)->priv)
|
||||
#define xhci_to_priv(x) ((struct xhci_plat_priv *)(x)->priv)
|
||||
|
||||
struct xhci_plat_priv_overwrite {
|
||||
struct xhci_vendor_ops *vendor_ops;
|
||||
};
|
||||
|
||||
int xhci_plat_register_vendor_ops(struct xhci_vendor_ops *vendor_ops);
|
||||
|
||||
#endif /* _XHCI_PLAT_H */
|
||||
|
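xhci_plat_register_vendor_ops(), declared above, is the entry point a vendor USB-offload module uses to hand its xhci_vendor_ops to the platform xHCI driver before xhci_vendor_init() runs in xhci_plat_probe(). A hedged sketch of such a module follows; the callback names match the xhci_vendor_ops struct added later in this merge, while the module itself and its behaviour are hypothetical:

/* Hypothetical out-of-tree offload module; only the registration API is real. */
#include <linux/module.h>
#include "xhci.h"	/* relative includes assume drivers/usb/host placement */
#include "xhci-plat.h"

static int example_vendor_init(struct xhci_hcd *xhci)
{
	/* allocate vendor_data, set up the offload co-processor channel, etc. */
	return 0;
}

static bool example_is_usb_offload_enabled(struct xhci_hcd *xhci,
					   struct xhci_virt_device *vdev,
					   unsigned int ep_index)
{
	/* decide per endpoint whether the offload path owns the transfer ring */
	return false;
}

static struct xhci_vendor_ops example_ops = {
	.vendor_init = example_vendor_init,
	.is_usb_offload_enabled = example_is_usb_offload_enabled,
};

static int __init example_offload_init(void)
{
	/* must run before the platform xHCI probes so xhci_vendor_init() sees it */
	return xhci_plat_register_vendor_ops(&example_ops);
}
module_init(example_offload_init);
MODULE_LICENSE("GPL v2");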
@ -2897,7 +2897,7 @@ static int handle_tx_event(struct xhci_hcd *xhci,
|
||||
* Returns >0 for "possibly more events to process" (caller should call again),
|
||||
* otherwise 0 if done. In future, <0 returns should indicate error code.
|
||||
*/
|
||||
static int xhci_handle_event(struct xhci_hcd *xhci)
|
||||
int xhci_handle_event(struct xhci_hcd *xhci)
|
||||
{
|
||||
union xhci_trb *event;
|
||||
int update_ptrs = 1;
|
||||
@ -2966,13 +2966,14 @@ static int xhci_handle_event(struct xhci_hcd *xhci)
|
||||
*/
|
||||
return 1;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(xhci_handle_event);
|
||||
|
||||
/*
|
||||
* Update Event Ring Dequeue Pointer:
|
||||
* - When all events have finished
|
||||
* - To avoid "Event Ring Full Error" condition
|
||||
*/
|
||||
static void xhci_update_erst_dequeue(struct xhci_hcd *xhci,
|
||||
void xhci_update_erst_dequeue(struct xhci_hcd *xhci,
|
||||
union xhci_trb *event_ring_deq)
|
||||
{
|
||||
u64 temp_64;
|
||||
@ -3002,6 +3003,16 @@ static void xhci_update_erst_dequeue(struct xhci_hcd *xhci,
|
||||
temp_64 |= ERST_EHB;
|
||||
xhci_write_64(xhci, temp_64, &xhci->ir_set->erst_dequeue);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(xhci_update_erst_dequeue);
|
||||
|
||||
static irqreturn_t xhci_vendor_queue_irq_work(struct xhci_hcd *xhci)
|
||||
{
|
||||
struct xhci_vendor_ops *ops = xhci_vendor_get_ops(xhci);
|
||||
|
||||
if (ops && ops->queue_irq_work)
|
||||
return ops->queue_irq_work(xhci);
|
||||
return IRQ_NONE;
|
||||
}
|
||||
|
||||
/*
|
||||
* xHCI spec says we can get an interrupt, and if the HC has an error condition,
|
||||
@ -3037,6 +3048,10 @@ irqreturn_t xhci_irq(struct usb_hcd *hcd)
|
||||
goto out;
|
||||
}
|
||||
|
||||
ret = xhci_vendor_queue_irq_work(xhci);
|
||||
if (ret == IRQ_HANDLED)
|
||||
goto out;
|
||||
|
||||
/*
|
||||
* Clear the op reg interrupt status first,
|
||||
* so we can receive interrupts from other MSI-X interrupters.
|
||||
|
@ -23,6 +23,7 @@
|
||||
#include "xhci-mtk.h"
|
||||
#include "xhci-debugfs.h"
|
||||
#include "xhci-dbgcap.h"
|
||||
#include "xhci-plat.h"
|
||||
|
||||
#define DRIVER_AUTHOR "Sarah Sharp"
|
||||
#define DRIVER_DESC "'eXtensible' Host Controller (xHC) Driver"
|
||||
@ -1477,6 +1478,11 @@ static int xhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flag
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
if (xhci_vendor_usb_offload_skip_urb(xhci, urb)) {
|
||||
xhci_dbg(xhci, "skip urb for usb offload\n");
|
||||
return -EOPNOTSUPP;
|
||||
}
|
||||
|
||||
if (usb_endpoint_xfer_isoc(&urb->ep->desc))
|
||||
num_tds = urb->number_of_packets;
|
||||
else if (usb_endpoint_is_bulk_out(&urb->ep->desc) &&
|
||||
@ -2830,6 +2836,14 @@ static int xhci_configure_endpoint(struct xhci_hcd *xhci,
|
||||
xhci_finish_resource_reservation(xhci, ctrl_ctx);
|
||||
spin_unlock_irqrestore(&xhci->lock, flags);
|
||||
}
|
||||
if (ret)
|
||||
goto failed;
|
||||
|
||||
ret = xhci_vendor_sync_dev_ctx(xhci, udev->slot_id);
|
||||
if (ret)
|
||||
xhci_warn(xhci, "sync device context failed, ret=%d", ret);
|
||||
|
||||
failed:
|
||||
return ret;
|
||||
}
|
||||
|
||||
@ -3133,6 +3147,13 @@ static void xhci_endpoint_reset(struct usb_hcd *hcd,
|
||||
|
||||
wait_for_completion(stop_cmd->completion);
|
||||
|
||||
err = xhci_vendor_sync_dev_ctx(xhci, udev->slot_id);
|
||||
if (err) {
|
||||
xhci_warn(xhci, "%s: Failed to sync device context failed, err=%d",
|
||||
__func__, err);
|
||||
goto cleanup;
|
||||
}
|
||||
|
||||
spin_lock_irqsave(&xhci->lock, flags);
|
||||
|
||||
/* config ep command clears toggle if add and drop ep flags are set */
|
||||
@ -3156,6 +3177,11 @@ static void xhci_endpoint_reset(struct usb_hcd *hcd,
|
||||
|
||||
wait_for_completion(cfg_cmd->completion);
|
||||
|
||||
err = xhci_vendor_sync_dev_ctx(xhci, udev->slot_id);
|
||||
if (err)
|
||||
xhci_warn(xhci, "%s: Failed to sync device context failed, err=%d",
|
||||
__func__, err);
|
||||
|
||||
xhci_free_command(xhci, cfg_cmd);
|
||||
cleanup:
|
||||
xhci_free_command(xhci, stop_cmd);
|
||||
@ -3699,6 +3725,13 @@ static int xhci_discover_or_reset_device(struct usb_hcd *hcd,
|
||||
/* Wait for the Reset Device command to finish */
|
||||
wait_for_completion(reset_device_cmd->completion);
|
||||
|
||||
ret = xhci_vendor_sync_dev_ctx(xhci, slot_id);
|
||||
if (ret) {
|
||||
xhci_warn(xhci, "%s: Failed to sync device context failed, err=%d",
|
||||
__func__, ret);
|
||||
goto command_cleanup;
|
||||
}
|
||||
|
||||
/* The Reset Device command can't fail, according to the 0.95/0.96 spec,
|
||||
* unless we tried to reset a slot ID that wasn't enabled,
|
||||
* or the device wasn't in the addressed or configured state.
|
||||
@ -3938,6 +3971,14 @@ int xhci_alloc_dev(struct usb_hcd *hcd, struct usb_device *udev)
|
||||
xhci_warn(xhci, "Could not allocate xHCI USB device data structures\n");
|
||||
goto disable_slot;
|
||||
}
|
||||
|
||||
ret = xhci_vendor_sync_dev_ctx(xhci, slot_id);
|
||||
if (ret) {
|
||||
xhci_warn(xhci, "%s: Failed to sync device context failed, err=%d",
|
||||
__func__, ret);
|
||||
goto disable_slot;
|
||||
}
|
||||
|
||||
vdev = xhci->devs[slot_id];
|
||||
slot_ctx = xhci_get_slot_ctx(xhci, vdev->out_ctx);
|
||||
trace_xhci_alloc_dev(slot_ctx);
|
||||
@ -4071,6 +4112,13 @@ static int xhci_setup_device(struct usb_hcd *hcd, struct usb_device *udev,
|
||||
/* ctrl tx can take up to 5 sec; XXX: need more time for xHC? */
|
||||
wait_for_completion(command->completion);
|
||||
|
||||
ret = xhci_vendor_sync_dev_ctx(xhci, udev->slot_id);
|
||||
if (ret) {
|
||||
xhci_warn(xhci, "%s: Failed to sync device context failed, err=%d",
|
||||
__func__, ret);
|
||||
goto out;
|
||||
}
|
||||
|
||||
/* FIXME: From section 4.3.4: "Software shall be responsible for timing
|
||||
* the SetAddress() "recovery interval" required by USB and aborting the
|
||||
* command on a timeout.
|
||||
@ -4217,6 +4265,14 @@ static int __maybe_unused xhci_change_max_exit_latency(struct xhci_hcd *xhci,
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
ret = xhci_vendor_sync_dev_ctx(xhci, udev->slot_id);
|
||||
if (ret) {
|
||||
spin_unlock_irqrestore(&xhci->lock, flags);
|
||||
xhci_warn(xhci, "%s: Failed to sync device context failed, err=%d",
|
||||
__func__, ret);
|
||||
return ret;
|
||||
}
|
||||
|
||||
xhci_slot_copy(xhci, command->in_ctx, virt_dev->out_ctx);
|
||||
spin_unlock_irqrestore(&xhci->lock, flags);
|
||||
|
||||
@ -4241,6 +4297,30 @@ static int __maybe_unused xhci_change_max_exit_latency(struct xhci_hcd *xhci,
|
||||
return ret;
|
||||
}
|
||||
|
||||
struct xhci_vendor_ops *xhci_vendor_get_ops(struct xhci_hcd *xhci)
|
||||
{
|
||||
return xhci_to_priv(xhci)->vendor_ops;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(xhci_vendor_get_ops);
|
||||
|
||||
int xhci_vendor_sync_dev_ctx(struct xhci_hcd *xhci, unsigned int slot_id)
|
||||
{
|
||||
struct xhci_vendor_ops *ops = xhci_vendor_get_ops(xhci);
|
||||
|
||||
if (ops && ops->sync_dev_ctx)
|
||||
return ops->sync_dev_ctx(xhci, slot_id);
|
||||
return 0;
|
||||
}
|
||||
|
||||
bool xhci_vendor_usb_offload_skip_urb(struct xhci_hcd *xhci, struct urb *urb)
|
||||
{
|
||||
struct xhci_vendor_ops *ops = xhci_vendor_get_ops(xhci);
|
||||
|
||||
if (ops && ops->usb_offload_skip_urb)
|
||||
return ops->usb_offload_skip_urb(xhci, urb);
|
||||
return false;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_PM
|
||||
|
||||
/* BESL to HIRD Encoding array for USB2 LPM */
|
||||
@ -4980,6 +5060,15 @@ static int xhci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev,
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
ret = xhci_vendor_sync_dev_ctx(xhci, hdev->slot_id);
|
||||
if (ret) {
|
||||
xhci_warn(xhci, "%s: Failed to sync device context failed, err=%d",
|
||||
__func__, ret);
|
||||
xhci_free_command(xhci, config_cmd);
|
||||
spin_unlock_irqrestore(&xhci->lock, flags);
|
||||
return ret;
|
||||
}
|
||||
|
||||
xhci_slot_copy(xhci, config_cmd->in_ctx, vdev->out_ctx);
|
||||
ctrl_ctx->add_flags |= cpu_to_le32(SLOT_FLAG);
|
||||
slot_ctx = xhci_get_slot_ctx(xhci, config_cmd->in_ctx);
|
||||
|
@ -2185,6 +2185,44 @@ static inline struct xhci_ring *xhci_urb_to_transfer_ring(struct xhci_hcd *xhci,
|
||||
urb->stream_id);
|
||||
}
|
||||
|
||||
/**
|
||||
* struct xhci_vendor_ops - function callbacks for vendor specific operations
|
||||
* @vendor_init: called for vendor init process
|
||||
* @vendor_cleanup: called for vendor cleanup process
|
||||
* @is_usb_offload_enabled: called to check if usb offload enabled
|
||||
* @queue_irq_work: called to queue vendor specific irq work
|
||||
* @alloc_dcbaa: called when allocating vendor specific dcbaa
|
||||
* @free_dcbaa: called to free vendor specific dcbaa
|
||||
* @alloc_transfer_ring: called when remote transfer ring allocation is required
|
||||
* @free_transfer_ring: called to free vendor specific transfer ring
|
||||
* @sync_dev_ctx: called when synchronization for device context is required
|
||||
*/
|
||||
struct xhci_vendor_ops {
|
||||
int (*vendor_init)(struct xhci_hcd *xhci);
|
||||
void (*vendor_cleanup)(struct xhci_hcd *xhci);
|
||||
bool (*is_usb_offload_enabled)(struct xhci_hcd *xhci,
|
||||
struct xhci_virt_device *vdev,
|
||||
unsigned int ep_index);
|
||||
irqreturn_t (*queue_irq_work)(struct xhci_hcd *xhci);
|
||||
|
||||
struct xhci_device_context_array *(*alloc_dcbaa)(struct xhci_hcd *xhci,
|
||||
gfp_t flags);
|
||||
void (*free_dcbaa)(struct xhci_hcd *xhci);
|
||||
|
||||
struct xhci_ring *(*alloc_transfer_ring)(struct xhci_hcd *xhci,
|
||||
u32 endpoint_type, enum xhci_ring_type ring_type,
|
||||
gfp_t mem_flags);
|
||||
void (*free_transfer_ring)(struct xhci_hcd *xhci,
|
||||
struct xhci_virt_device *virt_dev, unsigned int ep_index);
|
||||
int (*sync_dev_ctx)(struct xhci_hcd *xhci, unsigned int slot_id);
|
||||
bool (*usb_offload_skip_urb)(struct xhci_hcd *xhci, struct urb *urb);
|
||||
};
|
||||
|
||||
struct xhci_vendor_ops *xhci_vendor_get_ops(struct xhci_hcd *xhci);
|
||||
|
||||
int xhci_vendor_sync_dev_ctx(struct xhci_hcd *xhci, unsigned int slot_id);
|
||||
bool xhci_vendor_usb_offload_skip_urb(struct xhci_hcd *xhci, struct urb *urb);
|
||||
|
||||
/*
|
||||
* TODO: As per spec Isochronous IDT transmissions are supported. We bypass
|
||||
* them anyways as we where unable to find a device that matches the
|
||||
|
@ -441,6 +441,9 @@ struct tcpm_port {
|
||||
enum tcpm_ams next_ams;
|
||||
bool in_ams;
|
||||
|
||||
/* Auto vbus discharge status */
|
||||
bool auto_vbus_discharge_enabled;
|
||||
|
||||
#ifdef CONFIG_DEBUG_FS
|
||||
struct dentry *dentry;
|
||||
struct mutex logbuffer_lock; /* log buffer access lock */
|
||||
@ -510,6 +513,9 @@ static const char * const pd_rev[] = {
|
||||
(tcpm_port_is_sink(port) && \
|
||||
((port)->cc1 == TYPEC_CC_RP_3_0 || (port)->cc2 == TYPEC_CC_RP_3_0))
|
||||
|
||||
#define tcpm_wait_for_discharge(port) \
|
||||
(((port)->auto_vbus_discharge_enabled && !(port)->vbus_vsafe0v) ? PD_T_SAFE_0V : 0)
|
||||
|
||||
static enum tcpm_state tcpm_default_state(struct tcpm_port *port)
|
||||
{
|
||||
if (port->port_type == TYPEC_PORT_DRP) {
|
||||
@ -3431,6 +3437,8 @@ static int tcpm_src_attach(struct tcpm_port *port)
|
||||
if (port->tcpc->enable_auto_vbus_discharge) {
|
||||
ret = port->tcpc->enable_auto_vbus_discharge(port->tcpc, true);
|
||||
tcpm_log_force(port, "enable vbus discharge ret:%d", ret);
|
||||
if (!ret)
|
||||
port->auto_vbus_discharge_enabled = true;
|
||||
}
|
||||
|
||||
ret = tcpm_set_roles(port, true, TYPEC_SOURCE, tcpm_data_role_for_source(port));
|
||||
@ -3514,6 +3522,8 @@ static void tcpm_reset_port(struct tcpm_port *port)
|
||||
if (port->tcpc->enable_auto_vbus_discharge) {
|
||||
ret = port->tcpc->enable_auto_vbus_discharge(port->tcpc, false);
|
||||
tcpm_log_force(port, "Disable vbus discharge ret:%d", ret);
|
||||
if (!ret)
|
||||
port->auto_vbus_discharge_enabled = false;
|
||||
}
|
||||
port->in_ams = false;
|
||||
port->ams = NONE_AMS;
|
||||
@ -3587,6 +3597,8 @@ static int tcpm_snk_attach(struct tcpm_port *port)
|
||||
tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_USB, false, VSAFE5V);
|
||||
ret = port->tcpc->enable_auto_vbus_discharge(port->tcpc, true);
|
||||
tcpm_log_force(port, "enable vbus discharge ret:%d", ret);
|
||||
if (!ret)
|
||||
port->auto_vbus_discharge_enabled = true;
|
||||
}
|
||||
|
||||
ret = tcpm_set_roles(port, true, TYPEC_SINK, tcpm_data_role_for_sink(port));
|
||||
@ -4725,9 +4737,9 @@ static void _tcpm_cc_change(struct tcpm_port *port, enum typec_cc_status cc1,
|
||||
if (tcpm_port_is_disconnected(port) ||
|
||||
!tcpm_port_is_source(port)) {
|
||||
if (port->port_type == TYPEC_PORT_SRC)
|
||||
tcpm_set_state(port, SRC_UNATTACHED, 0);
|
||||
tcpm_set_state(port, SRC_UNATTACHED, tcpm_wait_for_discharge(port));
|
||||
else
|
||||
tcpm_set_state(port, SNK_UNATTACHED, 0);
|
||||
tcpm_set_state(port, SNK_UNATTACHED, tcpm_wait_for_discharge(port));
|
||||
}
|
||||
break;
|
||||
case SNK_UNATTACHED:
|
||||
@ -4758,7 +4770,23 @@ static void _tcpm_cc_change(struct tcpm_port *port, enum typec_cc_status cc1,
|
||||
tcpm_set_state(port, SNK_DEBOUNCED, 0);
|
||||
break;
|
||||
case SNK_READY:
|
||||
if (tcpm_port_is_disconnected(port))
|
||||
/*
|
||||
* EXIT condition is based primarily on vbus disconnect and CC is secondary.
|
||||
* "A port that has entered into USB PD communications with the Source and
|
||||
* has seen the CC voltage exceed vRd-USB may monitor the CC pin to detect
|
||||
* cable disconnect in addition to monitoring VBUS.
|
||||
*
|
||||
* A port that is monitoring the CC voltage for disconnect (but is not in
|
||||
* the process of a USB PD PR_Swap or USB PD FR_Swap) shall transition to
|
||||
* Unattached.SNK within tSinkDisconnect after the CC voltage remains below
|
||||
* vRd-USB for tPDDebounce."
|
||||
*
|
||||
* When set_auto_vbus_discharge_threshold is enabled, CC pins go
|
||||
* away before vbus decays to disconnect threshold. Allow
|
||||
* disconnect to be driven by vbus disconnect when auto vbus
|
||||
* discharge is enabled.
|
||||
*/
|
||||
if (!port->auto_vbus_discharge_enabled && tcpm_port_is_disconnected(port))
|
||||
tcpm_set_state(port, unattached_state(port), 0);
|
||||
else if (!port->pd_capable &&
|
||||
(cc1 != old_cc1 || cc2 != old_cc2))
|
||||
@ -4857,9 +4885,13 @@ static void _tcpm_cc_change(struct tcpm_port *port, enum typec_cc_status cc1,
|
||||
* Ignore CC changes here.
|
||||
*/
|
||||
break;
|
||||
|
||||
default:
|
||||
if (tcpm_port_is_disconnected(port))
|
||||
/*
|
||||
* While acting as sink and auto vbus discharge is enabled, Allow disconnect
|
||||
* to be driven by vbus disconnect.
|
||||
*/
|
||||
if (tcpm_port_is_disconnected(port) && !(port->pwr_role == TYPEC_SINK &&
|
||||
port->auto_vbus_discharge_enabled))
|
||||
tcpm_set_state(port, unattached_state(port), 0);
|
||||
break;
|
||||
}
|
||||
@ -5024,8 +5056,16 @@ static void _tcpm_pd_vbus_off(struct tcpm_port *port)
|
||||
case SRC_TRANSITION_SUPPLY:
|
||||
case SRC_READY:
|
||||
case SRC_WAIT_NEW_CAPABILITIES:
|
||||
/* Force to unattached state to re-initiate connection */
|
||||
tcpm_set_state(port, SRC_UNATTACHED, 0);
|
||||
/*
|
||||
* Force to unattached state to re-initiate connection.
|
||||
* DRP port should move to Unattached.SNK instead of Unattached.SRC if
|
||||
* sink removed. Although sink removal here is due to source's vbus collapse,
|
||||
* treat it the same way for consistency.
|
||||
*/
|
||||
if (port->port_type == TYPEC_PORT_SRC)
|
||||
tcpm_set_state(port, SRC_UNATTACHED, tcpm_wait_for_discharge(port));
|
||||
else
|
||||
tcpm_set_state(port, SNK_UNATTACHED, tcpm_wait_for_discharge(port));
|
||||
break;
|
||||
|
||||
case PORT_RESET:
|
||||
@ -5044,9 +5084,8 @@ static void _tcpm_pd_vbus_off(struct tcpm_port *port)
|
||||
break;
|
||||
|
||||
default:
|
||||
if (port->pwr_role == TYPEC_SINK &&
|
||||
port->attached)
|
||||
tcpm_set_state(port, SNK_UNATTACHED, 0);
|
||||
if (port->pwr_role == TYPEC_SINK && port->attached)
|
||||
tcpm_set_state(port, SNK_UNATTACHED, tcpm_wait_for_discharge(port));
|
||||
break;
|
||||
}
|
||||
}
|
||||
@ -5068,7 +5107,23 @@ static void _tcpm_pd_vbus_vsafe0v(struct tcpm_port *port)
|
||||
tcpm_set_state(port, tcpm_try_snk(port) ? SNK_TRY : SRC_ATTACHED,
|
||||
PD_T_CC_DEBOUNCE);
|
||||
break;
|
||||
case SRC_STARTUP:
|
||||
case SRC_SEND_CAPABILITIES:
|
||||
case SRC_SEND_CAPABILITIES_TIMEOUT:
|
||||
case SRC_NEGOTIATE_CAPABILITIES:
|
||||
case SRC_TRANSITION_SUPPLY:
|
||||
case SRC_READY:
|
||||
case SRC_WAIT_NEW_CAPABILITIES:
|
||||
if (port->auto_vbus_discharge_enabled) {
|
||||
if (port->port_type == TYPEC_PORT_SRC)
|
||||
tcpm_set_state(port, SRC_UNATTACHED, 0);
|
||||
else
|
||||
tcpm_set_state(port, SNK_UNATTACHED, 0);
|
||||
}
|
||||
break;
|
||||
default:
|
||||
if (port->pwr_role == TYPEC_SINK && port->auto_vbus_discharge_enabled)
|
||||
tcpm_set_state(port, SNK_UNATTACHED, 0);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
@ -742,6 +742,7 @@ drivers/firmware/arm_scmi/sensors.c
|
||||
drivers/firmware/arm_scmi/shmem.c
|
||||
drivers/firmware/arm_scmi/smc.c
|
||||
drivers/firmware/arm_scmi/system.c
|
||||
drivers/firmware/arm_scmi/voltage.c
|
||||
drivers/firmware/arm_scpi.c
|
||||
drivers/firmware/efi/arm-runtime.c
|
||||
drivers/firmware/efi/capsule.c
|
||||
@ -4056,7 +4057,9 @@ include/uapi/linux/netfilter/xt_CONNSECMARK.h
|
||||
include/uapi/linux/netfilter/xt_conntrack.h
|
||||
include/uapi/linux/netfilter/xt_CT.h
|
||||
include/uapi/linux/netfilter/xt_dscp.h
|
||||
include/uapi/linux/netfilter/xt_DSCP.h
|
||||
include/uapi/linux/netfilter/xt_ecn.h
|
||||
include/uapi/linux/netfilter/xt_esp.h
|
||||
include/uapi/linux/netfilter/xt_hashlimit.h
|
||||
include/uapi/linux/netfilter/xt_helper.h
|
||||
include/uapi/linux/netfilter/xt_IDLETIMER.h
|
||||
@ -4066,6 +4069,7 @@ include/uapi/linux/netfilter/xt_length.h
|
||||
include/uapi/linux/netfilter/xt_limit.h
|
||||
include/uapi/linux/netfilter/xt_mac.h
|
||||
include/uapi/linux/netfilter/xt_mark.h
|
||||
include/uapi/linux/netfilter/xt_multiport.h
|
||||
include/uapi/linux/netfilter/xt_NFLOG.h
|
||||
include/uapi/linux/netfilter/xt_NFQUEUE.h
|
||||
include/uapi/linux/netfilter/xt_owner.h
|
||||
@ -5370,7 +5374,10 @@ net/netfilter/xt_connmark.c
|
||||
net/netfilter/xt_CONNSECMARK.c
|
||||
net/netfilter/xt_conntrack.c
|
||||
net/netfilter/xt_CT.c
|
||||
net/netfilter/xt_dscp.c
|
||||
net/netfilter/xt_DSCP.c
|
||||
net/netfilter/xt_ecn.c
|
||||
net/netfilter/xt_esp.c
|
||||
net/netfilter/xt_hashlimit.c
|
||||
net/netfilter/xt_helper.c
|
||||
net/netfilter/xt_hl.c
|
||||
@ -5382,6 +5389,7 @@ net/netfilter/xt_limit.c
|
||||
net/netfilter/xt_mac.c
|
||||
net/netfilter/xt_mark.c
|
||||
net/netfilter/xt_MASQUERADE.c
|
||||
net/netfilter/xt_multiport.c
|
||||
net/netfilter/xt_nat.c
|
||||
net/netfilter/xt_NETMAP.c
|
||||
net/netfilter/xt_NFLOG.c
|
||||
@ -5416,6 +5424,7 @@ net/rfkill/rfkill.h
|
||||
net/sched/act_api.c
|
||||
net/sched/cls_api.c
|
||||
net/sched/cls_bpf.c
|
||||
net/sched/cls_fw.c
|
||||
net/sched/cls_u32.c
|
||||
net/sched/ematch.c
|
||||
net/sched/em_u32.c
|
||||
@ -5426,6 +5435,7 @@ net/sched/sch_generic.c
|
||||
net/sched/sch_htb.c
|
||||
net/sched/sch_ingress.c
|
||||
net/sched/sch_mq.c
|
||||
net/sched/sch_prio.c
|
||||
net/socket.c
|
||||
net/sysctl_net.c
|
||||
net/tipc/addr.c
|
||||
|
@@ -202,13 +202,20 @@ int anon_inode_getfd(const char *name, const struct file_operations *fops,
EXPORT_SYMBOL_GPL(anon_inode_getfd);

/**
 * Like anon_inode_getfd(), but creates a new !S_PRIVATE anon inode rather than
 * reuse the singleton anon inode, and calls the inode_init_security_anon() LSM
 * hook. This allows the inode to have its own security context and for a LSM
 * to reject creation of the inode. An optional @context_inode argument is
 * also added to provide the logical relationship with the new inode. The LSM
 * may use @context_inode in inode_init_security_anon(), but a reference to it
 * is not held.
 * anon_inode_getfd_secure - Like anon_inode_getfd(), but creates a new
 * !S_PRIVATE anon inode rather than reuse the singleton anon inode, and calls
 * the inode_init_security_anon() LSM hook. This allows the inode to have its
 * own security context and for a LSM to reject creation of the inode.
 *
 * @name:    [in]    name of the "class" of the new file
 * @fops:    [in]    file operations for the new file
 * @priv:    [in]    private data for the new file (will be file's private_data)
 * @flags:   [in]    flags
 * @context_inode:
 *           [in]    the logical relationship with the new inode (optional)
 *
 * The LSM may use @context_inode in inode_init_security_anon(), but a
 * reference to it is not held.
 */
int anon_inode_getfd_secure(const char *name, const struct file_operations *fops,
			    void *priv, int flags,
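The reworked kernel-doc above describes anon_inode_getfd_secure(); a short, hedged usage sketch follows. It assumes the parameter list ends with the const struct inode *context_inode argument documented above; the name, fops and flags are illustrative only:

#include <linux/anon_inodes.h>
#include <linux/fcntl.h>

static const struct file_operations example_fops;	/* illustrative */

static int example_create_fd(struct inode *context_inode, void *priv)
{
	/*
	 * Unlike anon_inode_getfd(), this allocates a fresh non-S_PRIVATE
	 * inode so inode_init_security_anon() can label it, and the LSM may
	 * veto creation. @context_inode is only consulted; no reference is
	 * taken.
	 */
	return anon_inode_getfd_secure("[example]", &example_fops, priv,
				       O_RDWR | O_CLOEXEC, context_inode);
}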
@@ -29,6 +29,8 @@ struct em_perf_state {
 * em_perf_domain - Performance domain
 * @table: List of performance states, in ascending order
 * @nr_perf_states: Number of performance states
 * @milliwatts: Flag indicating the power values are in milli-Watts
 *              or some other scale.
 * @cpus: Cpumask covering the CPUs of the domain. It's here
 *        for performance reasons to avoid potential cache
 *        misses during energy calculations in the scheduler
@@ -43,6 +45,7 @@ struct em_perf_state {
struct em_perf_domain {
	struct em_perf_state *table;
	int nr_perf_states;
	int milliwatts;
	unsigned long cpus[];
};

@@ -79,7 +82,8 @@ struct em_data_callback {
struct em_perf_domain *em_cpu_get(int cpu);
struct em_perf_domain *em_pd_get(struct device *dev);
int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
				struct em_data_callback *cb, cpumask_t *span);
				struct em_data_callback *cb, cpumask_t *span,
				bool milliwatts);
void em_dev_unregister_perf_domain(struct device *dev);

/**
@@ -189,7 +193,8 @@ struct em_data_callback {};

static inline
int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
				struct em_data_callback *cb, cpumask_t *span)
				struct em_data_callback *cb, cpumask_t *span,
				bool milliwatts)
{
	return -EINVAL;
}
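The prototype change above threads a new milliwatts flag through em_dev_register_perf_domain() and stores it in em_perf_domain. A hedged sketch of a caller under the new signature (the device, state count and callback body are placeholders; EM_DATA_CB and the active_power() callback shape are the existing 5.10 energy-model API):

#include <linux/energy_model.h>

static int example_active_power(unsigned long *mW, unsigned long *KHz,
				struct device *dev)
{
	/* fill *mW and *KHz for the next performance state; placeholder only */
	return -EINVAL;
}

static int example_register_em(struct device *dev, cpumask_t *cpus)
{
	struct em_data_callback em_cb = EM_DATA_CB(example_active_power);

	/* last argument: true when active_power() reports real milli-Watts */
	return em_dev_register_perf_domain(dev, 4, &em_cb, cpus, true);
}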
@ -223,6 +223,8 @@ enum regulator_type {
|
||||
* @name: Identifying name for the regulator.
|
||||
* @supply_name: Identifying the regulator supply
|
||||
* @of_match: Name used to identify regulator in DT.
|
||||
* @of_match_full_name: A flag to indicate that the of_match string, if
|
||||
* present, should be matched against the node full_name.
|
||||
* @regulators_node: Name of node containing regulator definitions in DT.
|
||||
* @of_parse_cb: Optional callback called only if of_match is present.
|
||||
* Will be called for each regulator parsed from DT, during
|
||||
@ -314,6 +316,7 @@ struct regulator_desc {
|
||||
const char *name;
|
||||
const char *supply_name;
|
||||
const char *of_match;
|
||||
bool of_match_full_name;
|
||||
const char *regulators_node;
|
||||
int (*of_parse_cb)(struct device_node *,
|
||||
const struct regulator_desc *,
|
||||
|
@ -8,6 +8,7 @@
|
||||
#ifndef _LINUX_SCMI_PROTOCOL_H
|
||||
#define _LINUX_SCMI_PROTOCOL_H
|
||||
|
||||
#include <linux/bitfield.h>
|
||||
#include <linux/device.h>
|
||||
#include <linux/notifier.h>
|
||||
#include <linux/types.h>
|
||||
@ -56,9 +57,11 @@ struct scmi_clock_info {
|
||||
};
|
||||
|
||||
struct scmi_handle;
|
||||
struct scmi_device;
|
||||
struct scmi_protocol_handle;
|
||||
|
||||
/**
|
||||
* struct scmi_clk_ops - represents the various operations provided
|
||||
* struct scmi_clk_proto_ops - represents the various operations provided
|
||||
* by SCMI Clock Protocol
|
||||
*
|
||||
* @count_get: get the count of clocks provided by SCMI
|
||||
@ -68,21 +71,21 @@ struct scmi_handle;
|
||||
* @enable: enables the specified clock
|
||||
* @disable: disables the specified clock
|
||||
*/
|
||||
struct scmi_clk_ops {
|
||||
int (*count_get)(const struct scmi_handle *handle);
|
||||
struct scmi_clk_proto_ops {
|
||||
int (*count_get)(const struct scmi_protocol_handle *ph);
|
||||
|
||||
const struct scmi_clock_info *(*info_get)
|
||||
(const struct scmi_handle *handle, u32 clk_id);
|
||||
int (*rate_get)(const struct scmi_handle *handle, u32 clk_id,
|
||||
(const struct scmi_protocol_handle *ph, u32 clk_id);
|
||||
int (*rate_get)(const struct scmi_protocol_handle *ph, u32 clk_id,
|
||||
u64 *rate);
|
||||
int (*rate_set)(const struct scmi_handle *handle, u32 clk_id,
|
||||
int (*rate_set)(const struct scmi_protocol_handle *ph, u32 clk_id,
|
||||
u64 rate);
|
||||
int (*enable)(const struct scmi_handle *handle, u32 clk_id);
|
||||
int (*disable)(const struct scmi_handle *handle, u32 clk_id);
|
||||
int (*enable)(const struct scmi_protocol_handle *ph, u32 clk_id);
|
||||
int (*disable)(const struct scmi_protocol_handle *ph, u32 clk_id);
|
||||
};
|
||||
|
||||
/**
|
||||
* struct scmi_perf_ops - represents the various operations provided
|
||||
* struct scmi_perf_proto_ops - represents the various operations provided
|
||||
* by SCMI Performance Protocol
|
||||
*
|
||||
* @limits_set: sets limits on the performance level of a domain
|
||||
@ -99,32 +102,33 @@ struct scmi_clk_ops {
|
||||
* @est_power_get: gets the estimated power cost for a given performance domain
|
||||
* at a given frequency
|
||||
*/
|
||||
struct scmi_perf_ops {
|
||||
int (*limits_set)(const struct scmi_handle *handle, u32 domain,
|
||||
struct scmi_perf_proto_ops {
|
||||
int (*limits_set)(const struct scmi_protocol_handle *ph, u32 domain,
|
||||
u32 max_perf, u32 min_perf);
|
||||
int (*limits_get)(const struct scmi_handle *handle, u32 domain,
|
||||
int (*limits_get)(const struct scmi_protocol_handle *ph, u32 domain,
|
||||
u32 *max_perf, u32 *min_perf);
|
||||
int (*level_set)(const struct scmi_handle *handle, u32 domain,
|
||||
int (*level_set)(const struct scmi_protocol_handle *ph, u32 domain,
|
||||
u32 level, bool poll);
|
||||
int (*level_get)(const struct scmi_handle *handle, u32 domain,
|
||||
int (*level_get)(const struct scmi_protocol_handle *ph, u32 domain,
|
||||
u32 *level, bool poll);
|
||||
int (*device_domain_id)(struct device *dev);
|
||||
int (*transition_latency_get)(const struct scmi_handle *handle,
|
||||
int (*transition_latency_get)(const struct scmi_protocol_handle *ph,
|
||||
struct device *dev);
|
||||
int (*device_opps_add)(const struct scmi_handle *handle,
|
||||
int (*device_opps_add)(const struct scmi_protocol_handle *ph,
|
||||
struct device *dev);
|
||||
int (*freq_set)(const struct scmi_handle *handle, u32 domain,
|
||||
int (*freq_set)(const struct scmi_protocol_handle *ph, u32 domain,
|
||||
unsigned long rate, bool poll);
|
||||
int (*freq_get)(const struct scmi_handle *handle, u32 domain,
|
||||
int (*freq_get)(const struct scmi_protocol_handle *ph, u32 domain,
|
||||
unsigned long *rate, bool poll);
|
||||
int (*est_power_get)(const struct scmi_handle *handle, u32 domain,
|
||||
int (*est_power_get)(const struct scmi_protocol_handle *ph, u32 domain,
|
||||
unsigned long *rate, unsigned long *power);
|
||||
bool (*fast_switch_possible)(const struct scmi_handle *handle,
|
||||
bool (*fast_switch_possible)(const struct scmi_protocol_handle *ph,
|
||||
struct device *dev);
|
||||
bool (*power_scale_mw_get)(const struct scmi_protocol_handle *ph);
|
||||
};
|
||||
|
||||
/**
|
||||
* struct scmi_power_ops - represents the various operations provided
|
||||
* struct scmi_power_proto_ops - represents the various operations provided
|
||||
* by SCMI Power Protocol
|
||||
*
|
||||
* @num_domains_get: get the count of power domains provided by SCMI
|
||||
@ -132,9 +136,9 @@ struct scmi_perf_ops {
|
||||
* @state_set: sets the power state of a power domain
|
||||
* @state_get: gets the power state of a power domain
|
||||
*/
|
||||
struct scmi_power_ops {
|
||||
int (*num_domains_get)(const struct scmi_handle *handle);
|
||||
char *(*name_get)(const struct scmi_handle *handle, u32 domain);
|
||||
struct scmi_power_proto_ops {
|
||||
int (*num_domains_get)(const struct scmi_protocol_handle *ph);
|
||||
char *(*name_get)(const struct scmi_protocol_handle *ph, u32 domain);
|
||||
#define SCMI_POWER_STATE_TYPE_SHIFT 30
|
||||
#define SCMI_POWER_STATE_ID_MASK (BIT(28) - 1)
|
||||
#define SCMI_POWER_STATE_PARAM(type, id) \
|
||||
@ -142,19 +146,186 @@ struct scmi_power_ops {
|
||||
((id) & SCMI_POWER_STATE_ID_MASK))
|
||||
#define SCMI_POWER_STATE_GENERIC_ON SCMI_POWER_STATE_PARAM(0, 0)
|
||||
#define SCMI_POWER_STATE_GENERIC_OFF SCMI_POWER_STATE_PARAM(1, 0)
|
||||
int (*state_set)(const struct scmi_handle *handle, u32 domain,
|
||||
int (*state_set)(const struct scmi_protocol_handle *ph, u32 domain,
|
||||
u32 state);
|
||||
int (*state_get)(const struct scmi_handle *handle, u32 domain,
|
||||
int (*state_get)(const struct scmi_protocol_handle *ph, u32 domain,
|
||||
u32 *state);
|
||||
};
|
||||
|
||||
struct scmi_sensor_info {
|
||||
u32 id;
|
||||
u8 type;
|
||||
s8 scale;
|
||||
u8 num_trip_points;
|
||||
bool async;
|
||||
/**
|
||||
* scmi_sensor_reading - represent a timestamped read
|
||||
*
|
||||
* Used by @reading_get_timestamped method.
|
||||
*
|
||||
* @value: The signed value sensor read.
|
||||
* @timestamp: An unsigned timestamp for the sensor read, as provided by
|
||||
* SCMI platform. Set to zero when not available.
|
||||
*/
|
||||
struct scmi_sensor_reading {
|
||||
long long value;
|
||||
unsigned long long timestamp;
|
||||
};
|
||||
|
||||
/**
|
||||
* scmi_range_attrs - specifies a sensor or axis values' range
|
||||
* @min_range: The minimum value which can be represented by the sensor/axis.
|
||||
* @max_range: The maximum value which can be represented by the sensor/axis.
|
||||
*/
|
||||
struct scmi_range_attrs {
|
||||
long long min_range;
|
||||
long long max_range;
|
||||
};
|
||||
|
||||
/**
|
||||
* scmi_sensor_axis_info - describes one sensor axes
|
||||
* @id: The axes ID.
|
||||
* @type: Axes type. Chosen amongst one of @enum scmi_sensor_class.
|
||||
* @scale: Power-of-10 multiplier applied to the axis unit.
|
||||
* @name: NULL-terminated string representing axes name as advertised by
|
||||
* SCMI platform.
|
||||
* @extended_attrs: Flag to indicate the presence of additional extended
|
||||
* attributes for this axes.
|
||||
* @resolution: Extended attribute representing the resolution of the axes.
|
||||
* Set to 0 if not reported by this axes.
|
||||
* @exponent: Extended attribute representing the power-of-10 multiplier that
|
||||
* is applied to the resolution field. Set to 0 if not reported by
|
||||
* this axes.
|
||||
* @attrs: Extended attributes representing minimum and maximum values
|
||||
* measurable by this axes. Set to 0 if not reported by this sensor.
|
||||
*/
|
||||
struct scmi_sensor_axis_info {
|
||||
unsigned int id;
|
||||
unsigned int type;
|
||||
int scale;
|
||||
char name[SCMI_MAX_STR_SIZE];
|
||||
bool extended_attrs;
|
||||
unsigned int resolution;
|
||||
int exponent;
|
||||
struct scmi_range_attrs attrs;
|
||||
};
|
||||
|
||||
/**
|
||||
* scmi_sensor_intervals_info - describes number and type of available update
|
||||
* intervals
|
||||
* @segmented: Flag for segmented intervals' representation. When True there
|
||||
* will be exactly 3 intervals in @desc, with each entry
|
||||
* representing a member of a segment in this order:
|
||||
* {lowest update interval, highest update interval, step size}
|
||||
* @count: Number of intervals described in @desc.
|
||||
* @desc: Array of @count interval descriptor bitmask represented as detailed in
|
||||
* the SCMI specification: it can be accessed using the accompanying
|
||||
* macros.
|
||||
* @prealloc_pool: A minimal preallocated pool of desc entries used to avoid
|
||||
* lesser-than-64-bytes dynamic allocation for small @count
|
||||
* values.
|
||||
*/
|
||||
struct scmi_sensor_intervals_info {
|
||||
bool segmented;
|
||||
unsigned int count;
|
||||
#define SCMI_SENS_INTVL_SEGMENT_LOW 0
|
||||
#define SCMI_SENS_INTVL_SEGMENT_HIGH 1
|
||||
#define SCMI_SENS_INTVL_SEGMENT_STEP 2
|
||||
unsigned int *desc;
|
||||
#define SCMI_SENS_INTVL_GET_SECS(x) FIELD_GET(GENMASK(20, 5), (x))
|
||||
#define SCMI_SENS_INTVL_GET_EXP(x) \
|
||||
({ \
|
||||
int __signed_exp = FIELD_GET(GENMASK(4, 0), (x)); \
|
||||
\
|
||||
if (__signed_exp & BIT(4)) \
|
||||
__signed_exp |= GENMASK(31, 5); \
|
||||
__signed_exp; \
|
||||
})
|
||||
#define SCMI_MAX_PREALLOC_POOL 16
|
||||
unsigned int prealloc_pool[SCMI_MAX_PREALLOC_POOL];
|
||||
};
|
||||
|
||||
/**
|
||||
* struct scmi_sensor_info - represents information related to one of the
|
||||
* available sensors.
|
||||
* @id: Sensor ID.
|
||||
* @type: Sensor type. Chosen amongst one of @enum scmi_sensor_class.
|
||||
* @scale: Power-of-10 multiplier applied to the sensor unit.
|
||||
* @num_trip_points: Number of maximum configurable trip points.
|
||||
* @async: Flag for asynchronous read support.
|
||||
* @update: Flag for continuouos update notification support.
|
||||
* @timestamped: Flag for timestamped read support.
|
||||
* @tstamp_scale: Power-of-10 multiplier applied to the sensor timestamps to
|
||||
* represent it in seconds.
|
||||
* @num_axis: Number of supported axis if any. Reported as 0 for scalar sensors.
|
||||
* @axis: Pointer to an array of @num_axis descriptors.
|
||||
* @intervals: Descriptor of available update intervals.
|
||||
* @sensor_config: A bitmask reporting the current sensor configuration as
|
||||
* detailed in the SCMI specification: it can accessed and
|
||||
* modified through the accompanying macros.
|
||||
* @name: NULL-terminated string representing sensor name as advertised by
|
||||
* SCMI platform.
|
||||
* @extended_scalar_attrs: Flag to indicate the presence of additional extended
|
||||
* attributes for this sensor.
|
||||
* @sensor_power: Extended attribute representing the average power
|
||||
* consumed by the sensor in microwatts (uW) when it is active.
|
||||
* Reported here only for scalar sensors.
|
||||
* Set to 0 if not reported by this sensor.
|
||||
* @resolution: Extended attribute representing the resolution of the sensor.
|
||||
* Reported here only for scalar sensors.
|
||||
* Set to 0 if not reported by this sensor.
|
||||
* @exponent: Extended attribute representing the power-of-10 multiplier that is
|
||||
* applied to the resolution field.
|
||||
* Reported here only for scalar sensors.
|
||||
* Set to 0 if not reported by this sensor.
|
||||
* @scalar_attrs: Extended attributes representing minimum and maximum
|
||||
* measurable values by this sensor.
|
||||
* Reported here only for scalar sensors.
|
||||
* Set to 0 if not reported by this sensor.
|
||||
*/
|
||||
struct scmi_sensor_info {
|
||||
unsigned int id;
|
||||
unsigned int type;
|
||||
int scale;
|
||||
unsigned int num_trip_points;
|
||||
bool async;
|
||||
bool update;
|
||||
bool timestamped;
|
||||
int tstamp_scale;
|
||||
unsigned int num_axis;
|
||||
struct scmi_sensor_axis_info *axis;
|
||||
struct scmi_sensor_intervals_info intervals;
|
||||
unsigned int sensor_config;
|
||||
#define SCMI_SENS_CFG_UPDATE_SECS_MASK GENMASK(31, 16)
|
||||
#define SCMI_SENS_CFG_GET_UPDATE_SECS(x) \
|
||||
FIELD_GET(SCMI_SENS_CFG_UPDATE_SECS_MASK, (x))
|
||||
|
||||
#define SCMI_SENS_CFG_UPDATE_EXP_MASK GENMASK(15, 11)
|
||||
#define SCMI_SENS_CFG_GET_UPDATE_EXP(x) \
|
||||
({ \
|
||||
int __signed_exp = \
|
||||
FIELD_GET(SCMI_SENS_CFG_UPDATE_EXP_MASK, (x)); \
|
||||
\
|
||||
if (__signed_exp & BIT(4)) \
|
||||
__signed_exp |= GENMASK(31, 5); \
|
||||
__signed_exp; \
|
||||
})
|
||||
|
||||
#define SCMI_SENS_CFG_ROUND_MASK GENMASK(10, 9)
|
||||
#define SCMI_SENS_CFG_ROUND_AUTO 2
|
||||
#define SCMI_SENS_CFG_ROUND_UP 1
|
||||
#define SCMI_SENS_CFG_ROUND_DOWN 0
|
||||
|
||||
#define SCMI_SENS_CFG_TSTAMP_ENABLED_MASK BIT(1)
|
||||
#define SCMI_SENS_CFG_TSTAMP_ENABLE 1
|
||||
#define SCMI_SENS_CFG_TSTAMP_DISABLE 0
|
||||
#define SCMI_SENS_CFG_IS_TSTAMP_ENABLED(x) \
|
||||
FIELD_GET(SCMI_SENS_CFG_TSTAMP_ENABLED_MASK, (x))
|
||||
|
||||
#define SCMI_SENS_CFG_SENSOR_ENABLED_MASK BIT(0)
|
||||
#define SCMI_SENS_CFG_SENSOR_ENABLE 1
|
||||
#define SCMI_SENS_CFG_SENSOR_DISABLE 0
|
||||
char name[SCMI_MAX_STR_SIZE];
|
||||
#define SCMI_SENS_CFG_IS_ENABLED(x) FIELD_GET(BIT(0), (x))
|
||||
bool extended_scalar_attrs;
|
||||
unsigned int sensor_power;
|
||||
unsigned int resolution;
|
||||
int exponent;
|
||||
struct scmi_range_attrs scalar_attrs;
|
||||
};
|
||||
|
||||
/*
|
||||
@ -163,34 +334,137 @@ struct scmi_sensor_info {
|
||||
*/
|
||||
enum scmi_sensor_class {
|
||||
NONE = 0x0,
|
||||
UNSPEC = 0x1,
|
||||
TEMPERATURE_C = 0x2,
|
||||
TEMPERATURE_F = 0x3,
|
||||
TEMPERATURE_K = 0x4,
|
||||
VOLTAGE = 0x5,
|
||||
CURRENT = 0x6,
|
||||
POWER = 0x7,
|
||||
ENERGY = 0x8,
|
||||
CHARGE = 0x9,
|
||||
VOLTAMPERE = 0xA,
|
||||
NITS = 0xB,
|
||||
LUMENS = 0xC,
|
||||
LUX = 0xD,
|
||||
CANDELAS = 0xE,
|
||||
KPA = 0xF,
|
||||
PSI = 0x10,
|
||||
NEWTON = 0x11,
|
||||
CFM = 0x12,
|
||||
RPM = 0x13,
|
||||
HERTZ = 0x14,
|
||||
SECS = 0x15,
|
||||
MINS = 0x16,
|
||||
HOURS = 0x17,
|
||||
DAYS = 0x18,
|
||||
WEEKS = 0x19,
|
||||
MILS = 0x1A,
|
||||
INCHES = 0x1B,
|
||||
FEET = 0x1C,
|
||||
CUBIC_INCHES = 0x1D,
|
||||
CUBIC_FEET = 0x1E,
|
||||
METERS = 0x1F,
|
||||
CUBIC_CM = 0x20,
|
||||
CUBIC_METERS = 0x21,
|
||||
LITERS = 0x22,
|
||||
FLUID_OUNCES = 0x23,
|
||||
RADIANS = 0x24,
|
||||
STERADIANS = 0x25,
|
||||
REVOLUTIONS = 0x26,
|
||||
CYCLES = 0x27,
|
||||
GRAVITIES = 0x28,
|
||||
OUNCES = 0x29,
|
||||
POUNDS = 0x2A,
|
||||
FOOT_POUNDS = 0x2B,
|
||||
OUNCE_INCHES = 0x2C,
|
||||
GAUSS = 0x2D,
|
||||
GILBERTS = 0x2E,
|
||||
HENRIES = 0x2F,
|
||||
FARADS = 0x30,
|
||||
OHMS = 0x31,
|
||||
SIEMENS = 0x32,
|
||||
MOLES = 0x33,
|
||||
BECQUERELS = 0x34,
|
||||
PPM = 0x35,
|
||||
DECIBELS = 0x36,
|
||||
DBA = 0x37,
|
||||
DBC = 0x38,
|
||||
GRAYS = 0x39,
|
||||
SIEVERTS = 0x3A,
|
||||
COLOR_TEMP_K = 0x3B,
|
||||
BITS = 0x3C,
|
||||
BYTES = 0x3D,
|
||||
WORDS = 0x3E,
|
||||
DWORDS = 0x3F,
|
||||
QWORDS = 0x40,
|
||||
PERCENTAGE = 0x41,
|
||||
PASCALS = 0x42,
|
||||
COUNTS = 0x43,
|
||||
GRAMS = 0x44,
|
||||
NEWTON_METERS = 0x45,
|
||||
HITS = 0x46,
|
||||
MISSES = 0x47,
|
||||
RETRIES = 0x48,
|
||||
OVERRUNS = 0x49,
|
||||
UNDERRUNS = 0x4A,
|
||||
COLLISIONS = 0x4B,
|
||||
PACKETS = 0x4C,
|
||||
MESSAGES = 0x4D,
|
||||
CHARS = 0x4E,
|
||||
ERRORS = 0x4F,
|
||||
CORRECTED_ERRS = 0x50,
|
||||
UNCORRECTABLE_ERRS = 0x51,
|
||||
SQ_MILS = 0x52,
|
||||
SQ_INCHES = 0x53,
|
||||
SQ_FEET = 0x54,
|
||||
SQ_CM = 0x55,
|
||||
SQ_METERS = 0x56,
|
||||
RADIANS_SEC = 0x57,
|
||||
BPM = 0x58,
|
||||
METERS_SEC_SQUARED = 0x59,
|
||||
METERS_SEC = 0x5A,
|
||||
CUBIC_METERS_SEC = 0x5B,
|
||||
MM_MERCURY = 0x5C,
|
||||
RADIANS_SEC_SQUARED = 0x5D,
|
||||
OEM_UNIT = 0xFF
|
||||
};
|
||||
|
||||
/**
|
||||
* struct scmi_sensor_ops - represents the various operations provided
|
||||
* struct scmi_sensor_proto_ops - represents the various operations provided
|
||||
* by SCMI Sensor Protocol
|
||||
*
|
||||
* @count_get: get the count of sensors provided by SCMI
|
||||
* @info_get: get the information of the specified sensor
|
||||
* @trip_point_config: selects and configures a trip-point of interest
|
||||
* @reading_get: gets the current value of the sensor
|
||||
* @reading_get_timestamped: gets the current value and timestamp, when
|
||||
* available, of the sensor. (as of v3.0 spec)
|
||||
* Supports multi-axis sensors for sensors which
|
||||
* supports it and if the @reading array size of
|
||||
* @count entry equals the sensor num_axis
|
||||
* @config_get: Get sensor current configuration
|
||||
* @config_set: Set sensor current configuration
|
||||
*/
|
||||
struct scmi_sensor_ops {
|
||||
int (*count_get)(const struct scmi_handle *handle);
|
||||
struct scmi_sensor_proto_ops {
|
||||
int (*count_get)(const struct scmi_protocol_handle *ph);
|
||||
const struct scmi_sensor_info *(*info_get)
|
||||
(const struct scmi_handle *handle, u32 sensor_id);
|
||||
int (*trip_point_config)(const struct scmi_handle *handle,
|
||||
(const struct scmi_protocol_handle *ph, u32 sensor_id);
|
||||
int (*trip_point_config)(const struct scmi_protocol_handle *ph,
|
||||
u32 sensor_id, u8 trip_id, u64 trip_value);
|
||||
int (*reading_get)(const struct scmi_handle *handle, u32 sensor_id,
|
||||
int (*reading_get)(const struct scmi_protocol_handle *ph, u32 sensor_id,
|
||||
u64 *value);
|
||||
int (*reading_get_timestamped)(const struct scmi_protocol_handle *ph,
|
||||
u32 sensor_id, u8 count,
|
||||
struct scmi_sensor_reading *readings);
|
||||
int (*config_get)(const struct scmi_protocol_handle *ph,
|
||||
u32 sensor_id, u32 *sensor_config);
|
||||
int (*config_set)(const struct scmi_protocol_handle *ph,
|
||||
u32 sensor_id, u32 sensor_config);
|
||||
};
|
||||
|
||||
/**
|
||||
* struct scmi_reset_ops - represents the various operations provided
|
||||
* struct scmi_reset_proto_ops - represents the various operations provided
|
||||
* by SCMI Reset Protocol
|
||||
*
|
||||
* @num_domains_get: get the count of reset domains provided by SCMI
|
||||
@ -200,18 +474,80 @@ struct scmi_sensor_ops {
|
||||
* @assert: explicitly assert reset signal of the specified reset domain
|
||||
* @deassert: explicitly deassert reset signal of the specified reset domain
|
||||
*/
|
||||
struct scmi_reset_ops {
|
||||
int (*num_domains_get)(const struct scmi_handle *handle);
|
||||
char *(*name_get)(const struct scmi_handle *handle, u32 domain);
|
||||
int (*latency_get)(const struct scmi_handle *handle, u32 domain);
|
||||
int (*reset)(const struct scmi_handle *handle, u32 domain);
|
||||
int (*assert)(const struct scmi_handle *handle, u32 domain);
|
||||
int (*deassert)(const struct scmi_handle *handle, u32 domain);
|
||||
struct scmi_reset_proto_ops {
|
||||
int (*num_domains_get)(const struct scmi_protocol_handle *ph);
|
||||
char *(*name_get)(const struct scmi_protocol_handle *ph, u32 domain);
|
||||
int (*latency_get)(const struct scmi_protocol_handle *ph, u32 domain);
|
||||
int (*reset)(const struct scmi_protocol_handle *ph, u32 domain);
|
||||
int (*assert)(const struct scmi_protocol_handle *ph, u32 domain);
|
||||
int (*deassert)(const struct scmi_protocol_handle *ph, u32 domain);
|
||||
};
|
||||
|
||||
/**
|
||||
* struct scmi_voltage_info - describe one available SCMI Voltage Domain
|
||||
*
|
||||
* @id: the domain ID as advertised by the platform
|
||||
* @segmented: defines the layout of the entries of array @levels_uv.
|
||||
* - when True the entries are to be interpreted as triplets,
|
||||
* each defining a segment representing a range of equally
|
||||
* space voltages: <lowest_volts>, <highest_volt>, <step_uV>
|
||||
* - when False the entries simply represent a single discrete
|
||||
* supported voltage level
|
||||
* @negative_volts_allowed: True if any of the entries of @levels_uv represent
|
||||
* a negative voltage.
|
||||
* @attributes: represents Voltage Domain advertised attributes
|
||||
* @name: name assigned to the Voltage Domain by platform
|
||||
* @num_levels: number of total entries in @levels_uv.
|
||||
* @levels_uv: array of entries describing the available voltage levels for
|
||||
* this domain.
|
||||
*/
|
||||
struct scmi_voltage_info {
|
||||
unsigned int id;
|
||||
bool segmented;
|
||||
bool negative_volts_allowed;
|
||||
unsigned int attributes;
|
||||
char name[SCMI_MAX_STR_SIZE];
|
||||
unsigned int num_levels;
|
||||
#define SCMI_VOLTAGE_SEGMENT_LOW 0
|
||||
#define SCMI_VOLTAGE_SEGMENT_HIGH 1
|
||||
#define SCMI_VOLTAGE_SEGMENT_STEP 2
|
||||
int *levels_uv;
|
||||
};
|
||||
|
||||
/**
 * struct scmi_voltage_proto_ops - represents the various operations provided
 * by SCMI Voltage Protocol
 *
 * @num_domains_get: get the count of voltage domains provided by SCMI
 * @info_get: get the information of the specified domain
 * @config_set: set the config for the specified domain
 * @config_get: get the config of the specified domain
 * @level_set: set the voltage level for the specified domain
 * @level_get: get the voltage level of the specified domain
 */
struct scmi_voltage_proto_ops {
	int (*num_domains_get)(const struct scmi_protocol_handle *ph);
	const struct scmi_voltage_info __must_check *(*info_get)
		(const struct scmi_protocol_handle *ph, u32 domain_id);
	int (*config_set)(const struct scmi_protocol_handle *ph, u32 domain_id,
			  u32 config);
#define	SCMI_VOLTAGE_ARCH_STATE_OFF	0x0
#define	SCMI_VOLTAGE_ARCH_STATE_ON	0x7
	int (*config_get)(const struct scmi_protocol_handle *ph, u32 domain_id,
			  u32 *config);
	int (*level_set)(const struct scmi_protocol_handle *ph, u32 domain_id,
			 u32 flags, s32 volt_uV);
	int (*level_get)(const struct scmi_protocol_handle *ph, u32 domain_id,
			 s32 *volt_uV);
};

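A hedged sketch of a regulator-style client enabling a domain and programming a level through these ops. The error-pointer handling of info_get(), the flags value of 0, and the way @volt_ops/@ph were obtained are assumptions, not guarantees of this diff.

#include <linux/err.h>
#include <linux/errno.h>
#include <linux/scmi_protocol.h>

/* Hedged sketch: enable a voltage domain and program it to 900000 uV. */
static int example_set_voltage(const struct scmi_voltage_proto_ops *volt_ops,
			       struct scmi_protocol_handle *ph, u32 domain_id)
{
	const struct scmi_voltage_info *vinfo;
	s32 now_uv;
	int ret;

	vinfo = volt_ops->info_get(ph, domain_id);
	if (IS_ERR_OR_NULL(vinfo))
		return vinfo ? PTR_ERR(vinfo) : -ENODEV;

	ret = volt_ops->config_set(ph, domain_id, SCMI_VOLTAGE_ARCH_STATE_ON);
	if (ret)
		return ret;

	/* The flags value 0 (plain synchronous request) is an assumption. */
	ret = volt_ops->level_set(ph, domain_id, 0, 900000);
	if (ret)
		return ret;

	ret = volt_ops->level_get(ph, domain_id, &now_uv);
	if (!ret)
		pr_info("domain %u (%s) now at %d uV\n",
			domain_id, vinfo->name, now_uv);

	return ret;
}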
/**
 * struct scmi_notify_ops - represents notifications' operations provided by
 * SCMI core
+ * @devm_register_event_notifier: Managed registration of a notifier_block for
+ *				   the requested event
+ * @devm_unregister_event_notifier: Managed unregistration of a notifier_block
+ *				     for the requested event
 * @register_event_notifier: Register a notifier_block for the requested event
 * @unregister_event_notifier: Unregister a notifier_block for the requested
 *			       event

@@ -221,7 +557,9 @@ struct scmi_reset_ops {
 * tuple: (proto_id, evt_id, src_id) using the provided register/unregister
 * interface where:
 *
- * @handle: The handle identifying the platform instance to use
+ * @sdev: The scmi_device to use when calling the devres managed ops devm_
+ * @handle: The handle identifying the platform instance to use, when not
+ *	    calling the managed ops devm_
 * @proto_id: The protocol ID as in SCMI Specification
 * @evt_id: The message ID of the desired event as in SCMI Specification
 * @src_id: A pointer to the desired source ID if different sources are

@@ -244,6 +582,13 @@ struct scmi_reset_ops {
 * @report: A custom struct describing the specific event delivered
 */
struct scmi_notify_ops {
+	int (*devm_register_event_notifier)(struct scmi_device *sdev,
+					    u8 proto_id, u8 evt_id, u32 *src_id,
+					    struct notifier_block *nb);
+	int (*devm_unregister_event_notifier)(struct scmi_device *sdev,
+					      u8 proto_id, u8 evt_id,
+					      u32 *src_id,
+					      struct notifier_block *nb);
	int (*register_event_notifier)(const struct scmi_handle *handle,
				       u8 proto_id, u8 evt_id, u32 *src_id,
				       struct notifier_block *nb);
@@ -257,42 +602,29 @@ struct scmi_notify_ops {
 *
 * @dev: pointer to the SCMI device
 * @version: pointer to the structure containing SCMI version information
- * @power_ops: pointer to set of power protocol operations
- * @perf_ops: pointer to set of performance protocol operations
- * @clk_ops: pointer to set of clock protocol operations
- * @sensor_ops: pointer to set of sensor protocol operations
- * @reset_ops: pointer to set of reset protocol operations
+ * @devm_acquire_protocol: devres managed method to get hold of a protocol,
+ *			   causing its initialization and related resource
+ *			   accounting
+ * @devm_get_protocol: devres managed method to acquire a protocol, causing
+ *		       its initialization and resource accounting, while getting
+ *		       protocol specific operations and a dedicated protocol
+ *		       handler
+ * @devm_put_protocol: devres managed method to release a protocol acquired
+ *		       with devm_acquire/get_protocol
 * @notify_ops: pointer to set of notifications related operations
 * @perf_priv: pointer to private data structure specific to performance
 *	       protocol (for internal use only)
 * @clk_priv: pointer to private data structure specific to clock
 *	      protocol (for internal use only)
 * @power_priv: pointer to private data structure specific to power
 *		protocol (for internal use only)
 * @sensor_priv: pointer to private data structure specific to sensors
 *		 protocol (for internal use only)
 * @reset_priv: pointer to private data structure specific to reset
 *		protocol (for internal use only)
 * @notify_priv: pointer to private data structure specific to notifications
 *		 (for internal use only)
 */
struct scmi_handle {
	struct device *dev;
	struct scmi_revision_info *version;
-	const struct scmi_perf_ops *perf_ops;
-	const struct scmi_clk_ops *clk_ops;
-	const struct scmi_power_ops *power_ops;
-	const struct scmi_sensor_ops *sensor_ops;
-	const struct scmi_reset_ops *reset_ops;
+	int __must_check (*devm_acquire_protocol)(struct scmi_device *sdev,
+						  u8 proto);
+	const void __must_check *
+		(*devm_get_protocol)(struct scmi_device *sdev, u8 proto,
+				     struct scmi_protocol_handle **ph);
+	void (*devm_put_protocol)(struct scmi_device *sdev, u8 proto);

	const struct scmi_notify_ops *notify_ops;
	/* for protocol internal use */
	void *perf_priv;
	void *clk_priv;
	void *power_priv;
	void *sensor_priv;
	void *reset_priv;
	void *notify_priv;
	void *system_priv;
};

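A hedged sketch of an SCMI driver probe acquiring a protocol through the new devres managed accessor instead of dereferencing a per-protocol ops pointer on the handle. The clock protocol is chosen arbitrarily, and the ERR_PTR-style error return of devm_get_protocol() is an assumption.

#include <linux/err.h>
#include <linux/errno.h>
#include <linux/scmi_protocol.h>

/* Hedged sketch: acquire the clock protocol in probe(), devres-managed. */
static int example_probe(struct scmi_device *sdev)
{
	const struct scmi_handle *handle = sdev->handle;
	struct scmi_protocol_handle *ph;
	const void *clk_ops;

	if (!handle || !handle->devm_get_protocol)
		return -ENODEV;

	clk_ops = handle->devm_get_protocol(sdev, SCMI_PROTOCOL_CLOCK, &ph);
	if (IS_ERR(clk_ops))
		return PTR_ERR(clk_ops);

	/*
	 * clk_ops would be cast to the clock protocol's ops type and every
	 * call would pass @ph; the protocol is released automatically when
	 * the device goes away, or explicitly via devm_put_protocol().
	 */
	return 0;
}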
enum scmi_std_protocol {

@@ -303,6 +635,7 @@ enum scmi_std_protocol {
	SCMI_PROTOCOL_CLOCK = 0x14,
	SCMI_PROTOCOL_SENSOR = 0x15,
	SCMI_PROTOCOL_RESET = 0x16,
+	SCMI_PROTOCOL_VOLTAGE = 0x17,
};

enum scmi_system_events {

@@ -376,9 +709,21 @@ static inline void scmi_driver_unregister(struct scmi_driver *driver) {}
#define module_scmi_driver(__scmi_driver)	\
	module_driver(__scmi_driver, scmi_register, scmi_unregister)

-typedef int (*scmi_prot_init_fn_t)(struct scmi_handle *);
-int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn);
-void scmi_protocol_unregister(int protocol_id);
+/**
+ * module_scmi_protocol() - Helper macro for registering a scmi protocol
+ * @__scmi_protocol: scmi_protocol structure
+ *
+ * Helper macro for scmi drivers to set up proper module init / exit
+ * functions. Replaces module_init() and module_exit() and keeps people from
+ * printing pointless things to the kernel log when their driver is loaded.
+ */
+#define module_scmi_protocol(__scmi_protocol)	\
+	module_driver(__scmi_protocol,		\
+		      scmi_protocol_register, scmi_protocol_unregister)
+
+struct scmi_protocol;
+int scmi_protocol_register(const struct scmi_protocol *proto);
+void scmi_protocol_unregister(const struct scmi_protocol *proto);

/* SCMI Notification API - Custom Event Reports */
enum scmi_notification_events {

@@ -386,6 +731,7 @@ enum scmi_notification_events {
	SCMI_EVENT_PERFORMANCE_LIMITS_CHANGED = 0x0,
	SCMI_EVENT_PERFORMANCE_LEVEL_CHANGED = 0x1,
	SCMI_EVENT_SENSOR_TRIP_POINT_EVENT = 0x0,
+	SCMI_EVENT_SENSOR_UPDATE = 0x1,
	SCMI_EVENT_RESET_ISSUED = 0x0,
	SCMI_EVENT_BASE_ERROR_EVENT = 0x0,
	SCMI_EVENT_SYSTEM_POWER_STATE_NOTIFIER = 0x0,

@@ -427,6 +773,14 @@ struct scmi_sensor_trip_point_report {
	unsigned int trip_point_desc;
};

+struct scmi_sensor_update_report {
+	ktime_t timestamp;
+	unsigned int agent_id;
+	unsigned int sensor_id;
+	unsigned int readings_count;
+	struct scmi_sensor_reading readings[];
+};
+
struct scmi_reset_issued_report {
	ktime_t timestamp;
	unsigned int agent_id;

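To tie the notification pieces above together, a hedged sketch of a client that registers a managed notifier for sensor-update events and walks the report payload. That the notifier's data pointer carries a struct scmi_sensor_update_report, and that a NULL src_id means "events from any source", are inferences, not statements made in this diff.

#include <linux/notifier.h>
#include <linux/scmi_protocol.h>

/* Hedged sketch: assumes @data carries the matching report struct. */
static int example_sensor_update_cb(struct notifier_block *nb,
				    unsigned long event, void *data)
{
	const struct scmi_sensor_update_report *r = data;
	unsigned int i;

	for (i = 0; i < r->readings_count; i++)
		pr_info("sensor %u: reading %u delivered\n", r->sensor_id, i);

	return NOTIFY_OK;
}

static struct notifier_block example_sensor_nb = {
	.notifier_call = example_sensor_update_cb,
};

static int example_register_sensor_notifier(struct scmi_device *sdev)
{
	const struct scmi_handle *handle = sdev->handle;

	/* NULL src_id is assumed to mean "any sensor". */
	return handle->notify_ops->devm_register_event_notifier(sdev,
				SCMI_PROTOCOL_SENSOR, SCMI_EVENT_SENSOR_UPDATE,
				NULL, &example_sensor_nb);
}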
@@ -3341,14 +3341,17 @@ void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask)

void cpuset_cpus_allowed_fallback(struct task_struct *tsk)
{
-	const struct cpumask *cs_mask = task_cs(tsk)->cpus_allowed;
	const struct cpumask *possible_mask = task_cpu_possible_mask(tsk);
+	const struct cpumask *cs_mask;
+
+	rcu_read_lock();
+	cs_mask = task_cs(tsk)->cpus_allowed;

-	if (!is_in_v2_mode() || !cpumask_subset(cs_mask, possible_mask))
-		return; /* select_fallback_rq will try harder */
+	if (!is_in_v2_mode() || !cpumask_subset(cs_mask, possible_mask))
+		goto unlock; /* select_fallback_rq will try harder */

	do_set_cpus_allowed(tsk, cs_mask);
+unlock:
+	rcu_read_unlock();

	/*

@@ -52,6 +52,17 @@ static int em_debug_cpus_show(struct seq_file *s, void *unused)
}
DEFINE_SHOW_ATTRIBUTE(em_debug_cpus);

+static int em_debug_units_show(struct seq_file *s, void *unused)
+{
+	struct em_perf_domain *pd = s->private;
+	char *units = pd->milliwatts ? "milliWatts" : "bogoWatts";
+
+	seq_printf(s, "%s\n", units);
+
+	return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(em_debug_units);
+
static void em_debug_create_pd(struct device *dev)
{
	struct dentry *d;

@@ -64,6 +75,8 @@ static void em_debug_create_pd(struct device *dev)
	debugfs_create_file("cpus", 0444, d, dev->em_pd->cpus,
			    &em_debug_cpus_fops);

+	debugfs_create_file("units", 0444, d, dev->em_pd, &em_debug_units_fops);
+
	/* Create a sub-directory for each performance state */
	for (i = 0; i < dev->em_pd->nr_perf_states; i++)
		em_debug_create_ps(&dev->em_pd->table[i], d);

@@ -250,17 +263,24 @@ EXPORT_SYMBOL_GPL(em_cpu_get);
 * @cpus	: Pointer to cpumask_t, which in case of a CPU device is
 *		  obligatory. It can be taken from i.e. 'policy->cpus'. For other
 *		  type of devices this should be set to NULL.
+ * @milliwatts	: Flag indicating that the power values are in milliWatts or
+ *		  in some other scale. It must be set properly.
 *
 * Create Energy Model tables for a performance domain using the callbacks
 * defined in cb.
 *
+ * It is important to set @milliwatts to the correct value. Some kernel
+ * sub-systems might rely on this flag and check if all devices in the EM are
+ * using the same scale.
+ *
 * If multiple clients register the same performance domain, all but the first
 * registration will be ignored.
 *
 * Return 0 on success
 */
int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
-				struct em_data_callback *cb, cpumask_t *cpus)
+				struct em_data_callback *cb, cpumask_t *cpus,
+				bool milliwatts)
{
	unsigned long cap, prev_cap = 0;
	int cpu, ret;

@@ -313,6 +333,8 @@ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
	if (ret)
		goto unlock;

+	dev->em_pd->milliwatts = milliwatts;
+
	em_debug_create_pd(dev);
	dev_info(dev, "EM: created perf domain\n");
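A hedged sketch of how a driver registering an Energy Model would pass the new milliwatts flag. The EM_DATA_CB() macro and the active_power() callback signature are assumed from the existing Energy Model API; the power/frequency values are illustrative only.

#include <linux/energy_model.h>

/*
 * Hedged sketch: callback signature and EM_DATA_CB() are assumptions based
 * on the pre-existing EM API; values are made up for illustration.
 */
static int example_active_power(unsigned long *mW, unsigned long *KHz,
				struct device *dev)
{
	/* Illustrative values: one OPP at 1 GHz costing 450 mW. */
	*KHz = 1000000;
	*mW = 450;
	return 0;
}

static int example_register_em(struct device *cpu_dev, cpumask_t *cpus)
{
	static struct em_data_callback em_cb = EM_DATA_CB(example_active_power);

	/*
	 * New in this change: the last argument tells the EM core whether the
	 * reported power values are real milliWatts or an abstract scale.
	 */
	return em_dev_register_perf_domain(cpu_dev, 3, &em_cb, cpus, true);
}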
@@ -56,15 +56,29 @@ else
obj-y := $(filter-out %/, $(obj-y))
endif

+# Expand $(foo-objs) $(foo-y) by calling $(call suffix-search,foo.o,-objs -y)
+suffix-search = $(foreach s,$(2),$($(1:.o=$s)))
# If $(foo-objs), $(foo-y), $(foo-m), or $(foo-) exists, foo.o is a composite object
-multi-used-y := $(sort $(foreach m,$(obj-y), $(if $(strip $($(m:.o=-objs)) $($(m:.o=-y)) $($(m:.o=-))), $(m))))
-multi-used-m := $(sort $(foreach m,$(obj-m), $(if $(strip $($(m:.o=-objs)) $($(m:.o=-y)) $($(m:.o=-m)) $($(m:.o=-))), $(m))))
+# Do this recursively to find nested composite objects.
+# foo-y may contain foo.o bar.o . For backwards compatibility, don't treat this
+# foo.o as a nested object
+multi-search = $(sort $(foreach m,$(1),$(if $(strip $(call suffix-search,$(m),$(2) -)),\
+			$(if $(filter $(m),$(strip $(call suffix-search,$(m),$(2) -))),,\
+			$(m) $(call multi-search,$(call suffix-search,$(m),$(2)),$(2))))))
+multi-used-y := $(call multi-search,$(obj-y),-objs -y)
+multi-used-m := $(call multi-search,$(obj-m),-objs -y -m)
multi-used := $(multi-used-y) $(multi-used-m)

# Replace multi-part objects by their individual parts,
# including built-in.a from subdirectories
-real-obj-y := $(foreach m, $(obj-y), $(if $(strip $($(m:.o=-objs)) $($(m:.o=-y)) $($(m:.o=-))),$($(m:.o=-objs)) $($(m:.o=-y)),$(m)))
-real-obj-m := $(foreach m, $(obj-m), $(if $(strip $($(m:.o=-objs)) $($(m:.o=-y)) $($(m:.o=-m)) $($(m:.o=-))),$($(m:.o=-objs)) $($(m:.o=-y)) $($(m:.o=-m)),$(m)))
+# Recursively search for real files. For backwards compatibility,
+# foo-y may contain foo.o bar.o . foo.o in this context is a real object, and
+# shouldn't be recursed into.
+real-search = $(foreach m,$(1), $(if $(strip $(call suffix-search,$(m),$(2) -)), \
+			$(filter $(m),$(call suffix-search,$(m),$(2))) $(call real-search,$(filter-out $(m),$(call suffix-search,$(m),$(2))),$(2)),\
+			$(m)))
+real-obj-y := $(call real-search, $(obj-y),-objs -y)
+real-obj-m := $(call real-search, $(obj-m),-objs -y -m)

always-y += $(always-m)