tvm.meta_schedule¶
Package tvm.meta_schedule. The meta schedule infrastructure.
Classes:
Builder – The abstract builder interface.
CostModel – Cost model.
Database – The abstract database interface.
ExtractedTask – A tuning task extracted from the high-level IR.
FeatureExtractor – Extractor for features from measure candidates for use in cost model.
MeasureCallback – Rules to apply after measure results are available.
Mutator – Mutator is designed to mutate the trace to explore the design space.
Postproc – Rules to apply a postprocessor to a schedule.
Profiler – Tuning time profiler.
Runner – The abstract runner interface.
ScheduleRule – Rules to modify a block in a schedule.
MeasureCandidate – Measure candidate class.
SearchStrategy – Search strategy is the class that generates the measure candidates.
SpaceGenerator – The abstract design space generator interface.
TaskScheduler – The abstract task scheduler interface.
TuneContext – The tune context class is designed to contain all resources for a tuning task.
Functions:
is_meta_schedule_enabled() – Return whether the meta schedule is enabled.
tune_tir(...) – Tune a TIR function or an IRModule of TIR functions.
tune_tasks(...) – Tune a list of tasks using a task scheduler.
derived_object(cls) – A decorator to register derived subclasses for TVM objects.
- class tvm.meta_schedule.Builder¶
The abstract builder interface.
Methods:
build(build_inputs) – Build the given inputs.
create([kind]) – Create a Builder.
- build(build_inputs: List[tvm.meta_schedule.builder.builder.BuilderInput]) List[tvm.meta_schedule.builder.builder.BuilderResult] ¶
Build the given inputs.
- Parameters
build_inputs (List[BuilderInput]) – The inputs to be built.
- Returns
build_results – The results of building the given inputs.
- Return type
List[BuilderResult]
- static create(kind: Literal['local'] = 'local', *args, **kwargs) tvm.meta_schedule.builder.builder.Builder ¶
Create a Builder.
- Parameters
kind (Literal["local"]) – The kind of the builder. For now, only “local” is supported.
- Returns
builder – The builder created.
- Return type
Builder
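Example
A minimal sketch of the builder interface: create a local builder and compile a tiny TIR workload. The elementwise workload below is a placeholder defined only for illustration; error handling is kept to a single check.
    import tvm
    from tvm import te, meta_schedule as ms
    from tvm.target import Target

    # A small illustrative TIR workload (not part of the reference above).
    A = te.placeholder((128, 128), name="A")
    B = te.compute((128, 128), lambda i, j: A[i, j] + 1.0, name="B")
    mod = tvm.IRModule({"main": te.create_prim_func([A, B])})

    builder = ms.Builder.create("local")
    (result,) = builder.build([ms.builder.BuilderInput(mod, Target("llvm"))])
    if result.error_msg is None:
        print("built artifact at:", result.artifact_path)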
- class tvm.meta_schedule.CostModel¶
Cost model.
Methods:
load(path) – Load the cost model from given file location.
save(path) – Save the cost model to given file location.
update(context, candidates, results) – Update the cost model given running results.
predict(context, candidates) – Predict normalized score with the cost model.
create(kind, *args, **kwargs) – Create a CostModel.
- load(path: str) None ¶
Load the cost model from given file location.
- Parameters
path (str) – The file path.
- save(path: str) None ¶
Save the cost model to given file location.
- Parameters
path (str) – The file path.
- update(context: tvm.meta_schedule.tune_context.TuneContext, candidates: List[tvm.meta_schedule.search_strategy.search_strategy.MeasureCandidate], results: List[tvm.meta_schedule.runner.runner.RunnerResult]) None ¶
Update the cost model given running results.
- Parameters
context (TuneContext,) – The tuning context.
candidates (List[MeasureCandidate]) – The measure candidates.
results (List[RunnerResult]) – The running results of the measure candidates.
- predict(context: tvm.meta_schedule.tune_context.TuneContext, candidates: List[tvm.meta_schedule.search_strategy.search_strategy.MeasureCandidate]) numpy.ndarray ¶
Predict normalized score with the cost model.
- Parameters
context (TuneContext,) – The tuning context.
candidates (List[MeasureCandidate]) – The measure candidates.
- Returns
result – The predicted normalized score.
- Return type
np.ndarray
- static create(kind: Literal['xgb', 'mlp', 'random', 'none'], *args, **kwargs) tvm.meta_schedule.cost_model.cost_model.CostModel ¶
Create a CostModel.
- Parameters
kind (Literal["xgb", "mlp", "random", "none"]) – The kind of the cost model. Can be “xgb”, “mlp”, “random” or “none”.
- Returns
cost_model – The created cost model.
- Return type
CostModel
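Example
A hedged sketch of how a cost model is typically used. The update/predict/save calls are shown as comments because they require a populated TuneContext and measured candidates from a real tuning run.
    from tvm import meta_schedule as ms

    cost_model = ms.CostModel.create("xgb")  # or "mlp", "random", "none"
    # Inside a tuning loop (sketch):
    #   cost_model.update(context, candidates, runner_results)  # learn from measurements
    #   scores = cost_model.predict(context, candidates)        # np.ndarray, higher is better
    #   cost_model.save("/path/to/model")                       # persist for later reuse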
- class tvm.meta_schedule.Database¶
The abstract database interface.
Methods:
has_workload(mod) – Check if the database has the given workload.
commit_workload(mod) – Commit a workload to the database if missing.
commit_tuning_record(record) – Commit a tuning record to the database.
get_top_k(workload, top_k) – Get the top K valid tuning records of given workload from the database.
get_all_tuning_records() – Get all the tuning records from the database.
query_tuning_record(mod, target, workload_name) – Query the best record of the given workload from the database.
query_schedule(mod, target, workload_name) – Query the best schedule of the given workload from the database.
query_ir_module(mod, target, workload_name) – Query the best IRModule of the given workload from the database.
dump_pruned(destination) – Dump the pruned database to files of JSONDatabase format.
query(mod, target, *[, workload_name, kind]) – Query the database to retrieve the best optimization outcome of the given workload.
current() – Get the current database under scope.
create([kind]) – Create a Database.
- has_workload(mod: tvm.ir.module.IRModule) bool ¶
Check if the database has the given workload.
- Parameters
mod (IRModule) – The IRModule to be searched for.
- Returns
result – Whether the database has the given workload.
- Return type
bool
- commit_workload(mod: tvm.ir.module.IRModule) tvm.meta_schedule.database.database.Workload ¶
Commit a workload to the database if missing.
- commit_tuning_record(record: tvm.meta_schedule.database.database.TuningRecord) None ¶
Commit a tuning record to the database.
- Parameters
record (TuningRecord) – The tuning record to add.
- get_top_k(workload: tvm.meta_schedule.database.database.Workload, top_k: int) List[tvm.meta_schedule.database.database.TuningRecord] ¶
Get the top K valid tuning records of given workload from the database.
- get_all_tuning_records() List[tvm.meta_schedule.database.database.TuningRecord] ¶
Get all the tuning records from the database.
- Returns
tuning_records – All tuning records from the database.
- Return type
List[TuningRecord]
- query_tuning_record(mod: tvm.ir.module.IRModule, target: tvm.target.target.Target, workload_name: str) Optional[tvm.meta_schedule.database.database.TuningRecord] ¶
Query the best record of the given workload from the database.
- query_schedule(mod: tvm.ir.module.IRModule, target: tvm.target.target.Target, workload_name: str) Optional[tvm.tir.schedule.schedule.Schedule] ¶
Query the best schedule of the given workload from the database.
- Parameters
mod (IRModule) – The IRModule to be searched for.
target (Target) – The target to be searched for.
workload_name (str) – The name of the workload to be searched for.
- Returns
schedule – The best schedule of the given workload; None if not found.
- Return type
Optional[tvm.tir.Schedule]
- query_ir_module(mod: tvm.ir.module.IRModule, target: tvm.target.target.Target, workload_name: str) Optional[tvm.ir.module.IRModule] ¶
Query the best IRModule of the given workload from the database.
- dump_pruned(destination: tvm.meta_schedule.database.database.Database) None ¶
Dump the pruned database to files of JSONDatabase format.
- Parameters
destination (Database) – The destination database to be dumped to.
- query(mod: tvm.ir.module.IRModule, target: tvm.target.target.Target, *, workload_name: str = 'main', kind: Union[Literal['schedule'], Literal['record'], Literal['ir_module']] = 'schedule') Union[tvm.tir.schedule.schedule.Schedule, tvm.ir.module.IRModule, tvm.meta_schedule.database.database.TuningRecord] ¶
Query the database to retrieve the best optimization outcome of the given workload.
- Parameters
mod (IRModule) – The IRModule to be searched for.
target (Target) – The target to be searched for.
workload_name (str) – The name of the workload to be searched for.
kind (Literal["schedule", "record", "ir_module"]) – The kind of optimization outcome to return.
- Returns
result – The best optimization outcome of the given workload.
- Return type
Union[tvm.tir.Schedule, IRModule, TuningRecord]
- static current() Optional[tvm.meta_schedule.database.database.Database] ¶
Get the current database under scope.
- static create(kind: Union[Literal['json', 'memory', 'union', 'ordered_union'], Callable[[tvm.tir.schedule.schedule.Schedule], bool]] = 'json', *args, **kwargs) tvm.meta_schedule.database.database.Database ¶
Create a Database.
- Parameters
kind (Union[Literal["json", "memory", "union", "ordered_union"], Callable[[tvm.tir.Schedule], bool]]) – The kind of the database to be created. The following kinds are supported: "json", "memory", "union", "ordered_union", and a custom schedule function.
- Returns
database – The created database.
- Return type
Database
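Example
A minimal sketch of the database workflow using the in-memory backend; the "json" backend persists the same data to files. The query calls are commented because they only return results after tuning has committed records.
    import tvm
    from tvm import te, meta_schedule as ms
    from tvm.target import Target

    # Illustrative workload.
    A = te.placeholder((128, 128), name="A")
    B = te.compute((128, 128), lambda i, j: A[i, j] + 1.0, name="B")
    mod = tvm.IRModule({"main": te.create_prim_func([A, B])})

    db = ms.Database.create("memory")        # "json" would persist to files instead
    workload = db.commit_workload(mod)       # registers the IRModule if missing
    print(db.has_workload(mod))              # True
    # After tuning has populated the database:
    #   record = db.query(mod, Target("llvm"), kind="record")
    #   sch = db.query_schedule(mod, Target("llvm"), workload_name="main")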
- class tvm.meta_schedule.ExtractedTask(task_name: str, mod: tvm.ir.module.IRModule, target: tvm.target.target.Target, dispatched: List[tvm.ir.module.IRModule], weight: int)¶
A tuning task extracted from the high-level IR.
- class tvm.meta_schedule.FeatureExtractor¶
Extractor for features from measure candidates for use in cost model.
Methods:
extract_from(context, candidates) – Extract features from the given measure candidates.
create(kind, *args, **kwargs) – Create a FeatureExtractor.
- extract_from(context: tvm.meta_schedule.tune_context.TuneContext, candidates: List[tvm.meta_schedule.search_strategy.search_strategy.MeasureCandidate]) List[tvm.runtime.ndarray.NDArray] ¶
Extract features from the given measure candidate.
- Parameters
context (TuneContext) – The tuning context for feature extraction.
candidates (List[MeasureCandidate]) – The measure candidates to extract features from.
- Returns
features – The feature tvm ndarray extracted.
- Return type
List[NDArray]
- static create(kind: Literal['per-store-feature'], *args, **kwargs) tvm.meta_schedule.feature_extractor.feature_extractor.FeatureExtractor ¶
Create a FeatureExtractor.
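Example
A hedged sketch of feature extraction: build a TuneContext around a small workload, take one schedule from its design space, wrap it into a MeasureCandidate, and extract per-store features. The workload and candidate construction here are illustrative, not prescriptive.
    import tvm
    from tvm import te, meta_schedule as ms

    A = te.placeholder((64, 64), name="A")
    B = te.compute((64, 64), lambda i, j: A[i, j] * 2.0, name="B")
    mod = tvm.IRModule({"main": te.create_prim_func([A, B])})

    context = ms.TuneContext(mod=mod, target="llvm", space_generator="post-order-apply")
    sch = context.generate_design_space()[0]
    candidate = ms.MeasureCandidate(sch, ms.arg_info.ArgInfo.from_prim_func(mod["main"]))

    extractor = ms.FeatureExtractor.create("per-store-feature")
    (features,) = extractor.extract_from(context, [candidate])
    print(features.shape)                    # one feature matrix per candidate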
- class tvm.meta_schedule.MeasureCallback¶
Rules to apply after measure results are available.
Methods:
apply(task_scheduler, task_id, ...) – Apply a measure callback to the given schedule.
create(kind) – Create a list of measure callbacks.
- apply(task_scheduler: TaskScheduler, task_id: int, measure_candidates: List[tvm.meta_schedule.search_strategy.search_strategy.MeasureCandidate], builder_results: List[tvm.meta_schedule.builder.builder.BuilderResult], runner_results: List[tvm.meta_schedule.runner.runner.RunnerResult]) None ¶
Apply a measure callback to the given schedule.
- Parameters
task_scheduler (TaskScheduler) – The task scheduler.
task_id (int) – The task id.
measure_candidates (List[MeasureCandidate]) – The measure candidates.
builder_results (List[BuilderResult]) – The builder results by building the measure candidates.
runner_results (List[RunnerResult]) – The runner results by running the built measure candidates.
- static create(kind: Literal['default']) List[tvm.meta_schedule.measure_callback.measure_callback.MeasureCallback] ¶
Create a list of measure callbacks.
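Example
A hedged sketch of extending the callback list: the default callbacks plus a custom Python callback registered through derived_object. PyMeasureCallback is the Python-side subclassing hook; the logging body is illustrative.
    from tvm import meta_schedule as ms

    @ms.derived_object
    class LoggingCallback(ms.measure_callback.PyMeasureCallback):
        def apply(self, task_scheduler, task_id, measure_candidates,
                  builder_results, runner_results) -> None:
            # Called after each batch of measurements.
            print(f"task {task_id}: measured {len(runner_results)} candidates")

    callbacks = list(ms.MeasureCallback.create("default")) + [LoggingCallback()]
    # Pass `callbacks` as `measure_callbacks=...` to tune_tir / tune_tasks.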
- class tvm.meta_schedule.Mutator¶
Mutator is designed to mutate the trace to explore the design space.
Methods:
apply(trace) – Apply the mutator function to the given trace.
clone() – Clone the mutator.
create(kind) – Create a list of default mutators.
- apply(trace: tvm.tir.schedule.trace.Trace) Optional[tvm.tir.schedule.trace.Trace] ¶
Apply the mutator function to the given trace.
- Parameters
trace (Trace) – The given trace for mutation.
- Returns
trace – None if mutator failed, otherwise return the mutated trace.
- Return type
Optional[Trace]
- clone() tvm.meta_schedule.mutator.mutator.Mutator ¶
Clone the mutator.
- Returns
mutator – The cloned mutator.
- Return type
Mutator
- static create(kind: Literal['llvm', 'cuda', 'cuda-tensorcore', 'hexagon']) Dict[tvm.meta_schedule.mutator.mutator.Mutator, float] ¶
Create a list of default mutators.
- Parameters
kind (Literal["llvm", "cuda", "cuda-tensorcore", "hexagon"]) – The kind of mutators.
- Returns
mutators – The default mutators with their probabilities.
- Return type
Dict[Mutator, float]
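Example
A hedged sketch: per the signature above, create returns a mapping from mutator to its selection probability. The apply call is shown as a comment because it operates on a trace produced during search.
    from tvm import meta_schedule as ms

    mutators = ms.Mutator.create("llvm")
    for mutator, probability in mutators.items():
        print(type(mutator).__name__, probability)
    # Inside evolutionary search, a mutator perturbs a sampled trace:
    #   new_trace = mutator.apply(trace)   # None if the mutation is not applicable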
- class tvm.meta_schedule.Postproc¶
Rules to apply a postprocessor to a schedule.
Methods:
apply(sch) – Apply a postprocessor to the given schedule.
clone() – Clone the postprocessor.
create(kind) – Create a list of default postprocessors.
- apply(sch: tvm.tir.schedule.schedule.Schedule) bool ¶
Apply a postprocessor to the given schedule.
- Parameters
sch (tvm.tir.Schedule) – The schedule to be post processed.
- Returns
result – Whether the postprocessor was successfully applied.
- Return type
bool
- clone() tvm.meta_schedule.postproc.postproc.Postproc ¶
Clone the postprocessor.
- Returns
cloned_postproc – The cloned postprocessor.
- Return type
Postproc
- static create(kind: Literal['llvm', 'cuda', 'cuda-tensorcore', 'hexagon']) List[tvm.meta_schedule.postproc.postproc.Postproc] ¶
Create a list of default postprocessors.
- Parameters
kind (Literal["llvm", "cuda", "cuda-tensorcore", "hexagon"]) – The kind of the postprocessors.
- Returns
postprocs – The list of postprocessors.
- Return type
List[Postproc]
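Example
A hedged sketch: list the default postprocessors for an LLVM target. The apply call is left as a comment because postprocessors are normally initialized and invoked by the search infrastructure on candidate schedules.
    from tvm import meta_schedule as ms

    postprocs = ms.Postproc.create("llvm")
    print([type(p).__name__ for p in postprocs])
    # During search, each postprocessor is applied to a candidate schedule:
    #   ok = postproc.apply(sch)   # False means the candidate is rejected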
- class tvm.meta_schedule.Profiler¶
Tuning time profiler.
Methods:
get() – Get the profiling results in seconds.
table() – Get the profiling results in a table format.
current() – Get the current profiler.
timeit(name) – Time a block of code.
- static current() Optional[tvm.meta_schedule.profiler.Profiler] ¶
Get the current profiler.
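Example
A hedged sketch of profiling a tuning script: the profiler is used as a scope, and named regions are timed with Profiler.timeit. The region name below is arbitrary.
    from tvm import meta_schedule as ms

    with ms.Profiler() as profiler:
        with ms.Profiler.timeit("my_preprocessing"):
            pass  # work to be timed goes here
    print(profiler.table())   # human-readable breakdown
    print(profiler.get())     # mapping of region name -> seconds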
- tvm.meta_schedule.is_meta_schedule_enabled() bool ¶
Return whether the meta-schedule is enabled.
- Returns
enabled – Whether the meta schedule is enabled.
- Return type
bool
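Example
A hedged sketch: the flag reflects the PassContext configuration used by the Relay backend integration; the config key below is assumed from that integration.
    import tvm
    from tvm import meta_schedule as ms

    with tvm.transform.PassContext(config={"relay.backend.use_meta_schedule": True}):
        print(ms.is_meta_schedule_enabled())   # True inside the scope
    print(ms.is_meta_schedule_enabled())       # False outside the scope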
- class tvm.meta_schedule.Runner¶
The abstract runner interface.
Methods:
run(runner_inputs) – Run the built artifact and get runner futures.
create([kind]) – Create a Runner.
- run(runner_inputs: List[tvm.meta_schedule.runner.runner.RunnerInput]) List[tvm.meta_schedule.runner.runner.RunnerFuture] ¶
Run the built artifact and get runner futures.
- Parameters
runner_inputs (List[RunnerInput]) – The inputs to the runner.
- Returns
runner_futures – The runner futures.
- Return type
List[RunnerFuture]
- static create(kind: Literal['local', 'rpc'] = 'local', *args, **kwargs) tvm.meta_schedule.runner.runner.Runner ¶
Create a Runner.
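Example
A hedged sketch of the runner protocol: the run call is commented because it needs an artifact path produced by a Builder and argument info describing the workload's buffers.
    from tvm import meta_schedule as ms

    runner = ms.Runner.create("local")
    # Given a BuilderResult `built` and `args_info` from the measure candidate:
    #   (future,) = runner.run([ms.runner.RunnerInput(built.artifact_path, "llvm", args_info)])
    #   result = future.result()   # RunnerResult with run_secs or error_msg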
- class tvm.meta_schedule.ScheduleRule¶
Rules to modify a block in a schedule.
Methods:
apply(sch, block) – Apply a schedule rule to the specific block in the given schedule.
clone() – Deep clone the schedule rule.
create(kind) – Create a list of schedule rules for the given kind.
- apply(sch: tvm.tir.schedule.schedule.Schedule, block: tvm.tir.schedule.schedule.BlockRV) List[tvm.tir.schedule.schedule.Schedule] ¶
Apply a schedule rule to the specific block in the given schedule.
- Parameters
sch (tvm.tir.Schedule) – The schedule to be modified.
block (BlockRV) – The specific block to apply the schedule rule.
- Returns
design_spaces – The list of schedules generated by applying the schedule rule.
- Return type
List[tvm.tir.Schedule]
- clone() tvm.meta_schedule.schedule_rule.schedule_rule.ScheduleRule ¶
Deep clone the schedule rule.
- Returns
cloned_rule – The cloned schedule rule.
- Return type
ScheduleRule
- static create(kind: Literal['llvm', 'cuda', 'cuda-tensorcore', 'hexagon']) List[tvm.meta_schedule.schedule_rule.schedule_rule.ScheduleRule] ¶
Create a list of schedule rules for the given kind.
- Parameters
kind (Literal["llvm", "cuda", "cuda-tensorcore", "hexagon"]) – The kind of the schedule rules.
- Returns
rules – The list of schedule rules.
- Return type
List[ScheduleRule]
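Example
A hedged sketch: list the default schedule rules for an LLVM target. Applying a rule to a block is commented because rules are normally driven by the post-order-apply space generator.
    from tvm import meta_schedule as ms

    rules = ms.ScheduleRule.create("llvm")
    print([type(r).__name__ for r in rules])
    # During space generation, each rule is applied to matching blocks:
    #   new_schedules = rule.apply(sch, block)   # List[tir.Schedule]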
- class tvm.meta_schedule.MeasureCandidate(sch: tvm.tir.schedule.schedule.Schedule, args_info: List[tvm.meta_schedule.arg_info.ArgInfo])¶
Measure candidate class.
- Parameters
sch (tvm.tir.Schedule) – The schedule to be measured.
args_info (List[ArgInfo]) – The argument information.
- class tvm.meta_schedule.SearchStrategy¶
Search strategy is the class that generates the measure candidates.
Methods:
pre_tuning(max_trials, num_trials_per_iter, ...) – Pre-tuning for the search strategy.
post_tuning() – Post-tuning for the search strategy.
generate_measure_candidates() – Generate measure candidates from design spaces for measurement.
notify_runner_results(measure_candidates, ...) – Update the search strategy with profiling results.
clone() – Clone the search strategy.
create([kind]) – Create a search strategy.
- pre_tuning(max_trials: int, num_trials_per_iter: int, design_spaces: List[tvm.tir.schedule.schedule.Schedule], database: Optional[Database] = None, cost_model: Optional[CostModel] = None) None ¶
Pre-tuning for the search strategy.
- Parameters
max_trials (int) – The maximum number of trials.
num_trials_per_iter (int) – The number of trials per iteration.
design_spaces (List[tvm.tir.Schedule]) – The design spaces used during tuning process.
database (Optional[Database] = None) – The database used during tuning process.
cost_model (Optional[CostModel] = None) – The cost model used during tuning process.
- generate_measure_candidates() Optional[List[tvm.meta_schedule.search_strategy.search_strategy.MeasureCandidate]] ¶
Generate measure candidates from design spaces for measurement.
- Returns
measure_candidates – The measure candidates generated, None if finished.
- Return type
Optional[List[MeasureCandidate]]
- notify_runner_results(measure_candidates: List[tvm.meta_schedule.search_strategy.search_strategy.MeasureCandidate], results: List[tvm.meta_schedule.runner.runner.RunnerResult]) None ¶
Update the search strategy with profiling results.
- Parameters
measure_candidates (List[MeasureCandidate]) – The measure candidates for update.
results (List[RunnerResult]) – The profiling results from the runner.
- clone() tvm.meta_schedule.search_strategy.search_strategy.SearchStrategy ¶
Clone the search strategy.
- Returns
cloned – The cloned search strategy.
- Return type
SearchStrategy
- static create(kind: Literal['evolutionary', 'replay-trace', 'replay-func'] = 'evolutionary', *args, **kwargs) tvm.meta_schedule.search_strategy.search_strategy.SearchStrategy ¶
Create a search strategy.
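Example
A hedged sketch of the raw search-strategy protocol; in practice these calls are issued by the task scheduler through a TuneContext, and the measure step is elided here.
    from tvm import meta_schedule as ms

    strategy = ms.SearchStrategy.create("evolutionary")
    # Driven by the tuning infrastructure (sketch):
    #   strategy.pre_tuning(max_trials, num_trials_per_iter, design_spaces,
    #                       database=database, cost_model=cost_model)
    #   while (candidates := strategy.generate_measure_candidates()) is not None:
    #       results = ...  # build and run the candidates elsewhere
    #       strategy.notify_runner_results(candidates, results)
    #   strategy.post_tuning()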
- class tvm.meta_schedule.SpaceGenerator¶
The abstract design space generator interface.
Methods:
generate_design_space(mod) – Generate design spaces given a module.
clone() – Clone the design space generator.
create([kind]) – Create a design space generator.
- generate_design_space(mod: tvm.ir.module.IRModule) List[tvm.tir.schedule.schedule.Schedule] ¶
Generate design spaces given a module.
- Parameters
mod (IRModule) – The module used for design space generation.
- Returns
design_spaces – The generated design spaces, i.e., schedules.
- Return type
List[tvm.tir.Schedule]
- clone() tvm.meta_schedule.space_generator.space_generator.SpaceGenerator ¶
Clone the design space generator.
- Returns
cloned_sg – The cloned design space generator.
- Return type
SpaceGenerator
- static create(kind: Union[Literal['post-order-apply', 'union'], Callable[[tvm.tir.schedule.schedule.Schedule], None], Callable[[tvm.tir.schedule.schedule.Schedule], tvm.tir.schedule.schedule.Schedule], Callable[[tvm.tir.schedule.schedule.Schedule], List[tvm.tir.schedule.schedule.Schedule]]] = 'post-order-apply', *args, **kwargs) tvm.meta_schedule.space_generator.space_generator.SpaceGenerator ¶
Create a design space generator.
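Example
A hedged sketch of a design space defined by a plain Python schedule function, wrapped into a SpaceGenerator via create and exercised through a TuneContext. The workload and the schedule decisions are illustrative.
    import tvm
    from tvm import te, tir, meta_schedule as ms

    A = te.placeholder((64, 64), name="A")
    B = te.compute((64, 64), lambda i, j: A[i, j] + 1.0, name="B")
    mod = tvm.IRModule({"main": te.create_prim_func([A, B])})

    def schedule_fn(sch: tir.Schedule) -> None:
        # One fixed design point: parallelize the outer loop, vectorize the inner one.
        block = sch.get_block("B")
        i, j = sch.get_loops(block)
        sch.parallel(i)
        sch.vectorize(j)

    space = ms.SpaceGenerator.create(schedule_fn)
    context = ms.TuneContext(mod=mod, target="llvm", space_generator=space)
    (design,) = context.generate_design_space()
    print(design.trace)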
- class tvm.meta_schedule.TaskScheduler¶
The abstract task scheduler interface.
Methods:
next_task_id() – Fetch the next task id.
join_running_task(task_id) – Wait until the task is finished.
tune(tasks, task_weights, max_trials_global, ...) – Auto-tuning.
terminate_task(task_id) – Terminate the task.
touch_task(task_id) – Touch the task and update its status.
print_tuning_statistics() – Print out a human-readable format of the tuning statistics.
create([kind]) – Create a task scheduler.
- join_running_task(task_id: int) List[tvm.meta_schedule.runner.runner.RunnerResult] ¶
Wait until the task is finished.
- Parameters
task_id (int) – The task id to be joined.
- Returns
results – The list of results.
- Return type
List[RunnerResult]
- tune(tasks: List[tvm.meta_schedule.tune_context.TuneContext], task_weights: List[float], max_trials_global: int, max_trials_per_task: int, num_trials_per_iter: int, builder: tvm.meta_schedule.builder.builder.Builder, runner: tvm.meta_schedule.runner.runner.Runner, measure_callbacks: List[tvm.meta_schedule.measure_callback.measure_callback.MeasureCallback], database: Optional[tvm.meta_schedule.database.database.Database], cost_model: Optional[tvm.meta_schedule.cost_model.cost_model.CostModel]) None ¶
Auto-tuning.
- Parameters
tasks (List[TuneContext]) – The list of tuning contexts as tasks.
task_weights (List[float]) – The list of task weights.
max_trials_global (int) – The maximum number of trials globally.
max_trials_per_task (int) – The maximum number of trials per task.
num_trials_per_iter (int) – The number of trials per iteration.
builder (Builder) – The builder.
runner (Runner) – The runner.
measure_callbacks (List[MeasureCallback]) – The list of measure callbacks.
database (Optional[Database]) – The database.
cost_model (Optional[CostModel]) – The cost model.
- terminate_task(task_id: int) None ¶
Terminate the task.
- Parameters
task_id (int) – The task id to be terminated.
- touch_task(task_id: int) None ¶
Touch the task and update its status.
- Parameters
task_id (int) – The task id to be checked.
- static create(kind: Literal['round-robin', 'gradient'] = 'gradient', *args, **kwargs) tvm.meta_schedule.task_scheduler.task_scheduler.TaskScheduler ¶
Create a task scheduler.
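Example
A hedged sketch of driving tuning tasks explicitly through a task scheduler; tune_tasks below wraps essentially this call with defaults, so the tune invocation is shown as a comment.
    from tvm import meta_schedule as ms

    scheduler = ms.TaskScheduler.create("round-robin")   # or "gradient"
    # Given a list of TuneContext tasks and matching weights:
    #   scheduler.tune(
    #       tasks, task_weights,
    #       max_trials_global=64, max_trials_per_task=32, num_trials_per_iter=16,
    #       builder=ms.Builder.create("local"), runner=ms.Runner.create("local"),
    #       measure_callbacks=ms.MeasureCallback.create("default"),
    #       database=ms.Database.create("memory"), cost_model=ms.CostModel.create("xgb"),
    #   )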
- tvm.meta_schedule.tune_tir(mod: Union[tvm.ir.module.IRModule, tvm.tir.function.PrimFunc], target: Union[str, tvm.target.target.Target], work_dir: str, max_trials_global: int, *, max_trials_per_task: Optional[int] = None, num_trials_per_iter: int = 64, builder: Union[tvm.meta_schedule.builder.builder.Builder, Literal['local']] = 'local', runner: Union[tvm.meta_schedule.runner.runner.Runner, Literal['local', 'rpc']] = 'local', database: Union[tvm.meta_schedule.database.database.Database, Literal['json', 'memory']] = 'json', cost_model: Union[tvm.meta_schedule.cost_model.cost_model.CostModel, Literal['xgb', 'mlp', 'random']] = 'xgb', measure_callbacks: Union[List[tvm.meta_schedule.measure_callback.measure_callback.MeasureCallback], tvm.meta_schedule.measure_callback.measure_callback.MeasureCallback, Literal['default']] = 'default', task_scheduler: Union[tvm.meta_schedule.task_scheduler.task_scheduler.TaskScheduler, Literal['gradient', 'round-robin']] = 'gradient', space: Union[tvm.meta_schedule.space_generator.space_generator.SpaceGenerator, Callable[[tvm.tir.schedule.schedule.Schedule], None], Callable[[tvm.tir.schedule.schedule.Schedule], tvm.tir.schedule.schedule.Schedule], Callable[[tvm.tir.schedule.schedule.Schedule], List[tvm.tir.schedule.schedule.Schedule]], Literal['post-order-apply', 'union']] = 'post-order-apply', strategy: Union[tvm.meta_schedule.search_strategy.search_strategy.SearchStrategy, Literal['replay-func', 'replay-trace', 'evolutionary']] = 'evolutionary', num_tuning_cores: Union[Literal['physical', 'logical'], int] = 'physical', seed: Optional[int] = None, module_equality: str = 'structural', special_space: Optional[Mapping[str, Union[tvm.meta_schedule.space_generator.space_generator.SpaceGenerator, Callable[[tvm.tir.schedule.schedule.Schedule], None], Callable[[tvm.tir.schedule.schedule.Schedule], tvm.tir.schedule.schedule.Schedule], Callable[[tvm.tir.schedule.schedule.Schedule], List[tvm.tir.schedule.schedule.Schedule]], Literal['post-order-apply', 'union']]]] = None) tvm.meta_schedule.database.database.Database ¶
Tune a TIR function or an IRModule of TIR functions.
- Parameters
mod (Union[ir.IRModule, tir.PrimFunc]) – The TIR IRModule to tune.
target (Union[str, Target]) – The target to tune for.
work_dir (str) – The working directory.
max_trials_global (int) – The maximum number of trials to run globally.
max_trials_per_task (Optional[int]) – The maximum number of trials to run per task.
num_trials_per_iter (int) – The number of trials to run per iteration.
builder (Builder.BuilderType) – The builder.
runner (Runner.RunnerType) – The runner.
database (Database.DatabaseType) – The database.
cost_model (CostModel.CostModelType) – The cost model.
measure_callbacks (MeasureCallback.CallbackListType) – The measure callbacks.
task_scheduler (TaskScheduler.TaskSchedulerType) – The task scheduler.
space (SpaceGenerator.SpaceGeneratorType) – The space generator.
strategy (SearchStrategy.SearchStrategyType) – The search strategy.
num_tuning_cores (Union[Literal["physical", "logical"], int]) – The number of CPU cores to use during tuning.
seed (Optional[int]) – The seed for the random number generator.
module_equality (Optional[str]) – A string to specify the module equality testing and hashing method.
special_space (Optional[Mapping[str, SpaceGenerator.SpaceGeneratorType]]) – A mapping from task name to a special space generator for that task.
- Returns
database – The database with all tuning records.
- Return type
Database
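Example
A hedged end-to-end sketch: tune a small matmul on CPU with a tiny trial budget and query the best schedule back from the returned database. This launches real build-and-run measurements; the target string and trial counts are illustrative and should be adapted to the machine.
    import tempfile
    import tvm
    from tvm import te, meta_schedule as ms

    A = te.placeholder((128, 128), name="A")
    B = te.placeholder((128, 128), name="B")
    k = te.reduce_axis((0, 128), name="k")
    C = te.compute((128, 128), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
    mod = tvm.IRModule({"main": te.create_prim_func([A, B, C])})

    target = tvm.target.Target("llvm --num-cores=4")
    database = ms.tune_tir(
        mod=mod,
        target=target,
        work_dir=tempfile.mkdtemp(),
        max_trials_global=32,
        num_trials_per_iter=16,
    )
    sch = database.query_schedule(mod, target, workload_name="main")
    print(sch.trace if sch is not None else "no valid record found")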
- tvm.meta_schedule.tune_tasks(*, tasks: List[tvm.meta_schedule.tune_context.TuneContext], task_weights: List[float], work_dir: str, max_trials_global: int, max_trials_per_task: Optional[int] = None, num_trials_per_iter: int = 64, builder: Union[tvm.meta_schedule.builder.builder.Builder, Literal['local']] = 'local', runner: Union[tvm.meta_schedule.runner.runner.Runner, Literal['local', 'rpc']] = 'local', database: Union[tvm.meta_schedule.database.database.Database, Literal['json', 'memory']] = 'json', cost_model: Union[tvm.meta_schedule.cost_model.cost_model.CostModel, Literal['xgb', 'mlp', 'random']] = 'xgb', measure_callbacks: Union[List[tvm.meta_schedule.measure_callback.measure_callback.MeasureCallback], tvm.meta_schedule.measure_callback.measure_callback.MeasureCallback, Literal['default']] = 'default', task_scheduler: Union[tvm.meta_schedule.task_scheduler.task_scheduler.TaskScheduler, Literal['gradient', 'round-robin']] = 'gradient', module_equality: str = 'structural') tvm.meta_schedule.database.database.Database ¶
Tune a list of tasks using a task scheduler.
- Parameters
tasks (List[TuneContext]) – The list of tasks to tune.
task_weights (List[float]) – The weight of each task.
work_dir (str) – The working directory.
max_trials_global (int) – The maximum number of trials to run globally.
max_trials_per_task (Optional[int]) – The maximum number of trials to run per task.
num_trials_per_iter (int) – The number of trials to run per iteration.
builder (Builder.BuilderType) – The builder.
runner (Runner.RunnerType) – The runner.
database (Database.DatabaseType) – The database.
cost_model (CostModel.CostModelType) – The cost model.
measure_callbacks (MeasureCallback.CallbackListType) – The measure callbacks.
task_scheduler (TaskScheduler.TaskSchedulerType) – The task scheduler.
module_equality (Optional[str]) – A string to specify the module equality testing and hashing method. It must be one of the following:
- "structural": Use StructuralEqual/Hash.
- "ignore-ndarray": Same as "structural", but ignore ndarray raw data during equality testing and hashing.
- "anchor-block": Apply equality testing and hashing on the anchor block extracted from a given module. The "ignore-ndarray" variant is used for the extracted blocks, or in case no anchor block is found. For the definition of the anchor block, see tir/analysis/analysis.py.
- Returns
database – The database with all tuning records.
- Return type
Database
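Example
A hedged sketch of tuning two TIR workloads jointly under one task scheduler; each task is an explicit TuneContext. The workloads, weights, and target string are illustrative.
    import tempfile
    import tvm
    from tvm import te, meta_schedule as ms

    def make_task(n: int, name: str) -> ms.TuneContext:
        A = te.placeholder((n, n), name="A")
        B = te.compute((n, n), lambda i, j: A[i, j] * 2.0, name="B")
        mod = tvm.IRModule({"main": te.create_prim_func([A, B])})
        return ms.TuneContext(
            mod=mod,
            target="llvm --num-cores=4",
            space_generator="post-order-apply",
            search_strategy="evolutionary",
            task_name=name,
        )

    database = ms.tune_tasks(
        tasks=[make_task(128, "small"), make_task(256, "large")],
        task_weights=[1.0, 2.0],
        work_dir=tempfile.mkdtemp(),
        max_trials_global=64,
    )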
- class tvm.meta_schedule.TuneContext(mod: Optional[tvm.ir.module.IRModule] = None, *, target: Optional[Union[tvm.target.target.Target, str]] = None, space_generator: Optional[SpaceGenerator.SpaceGeneratorType] = None, search_strategy: Optional[SearchStrategy.SearchStrategyType] = None, task_name: str = 'main', rand_state: int = - 1, num_threads: Union[int, Literal['physical', 'logical']] = 'physical', logger: Optional[logging.Logger] = None)¶
The tune context class is designed to contain all resources for a tuning task.
- Parameters
mod (Optional[IRModule] = None) – The workload to be optimized.
target (Optional[Target] = None) – The target to be optimized for.
space_generator (Union[None, ScheduleFnType, SpaceGenerator] = None) – The design space generator.
search_strategy (Union[None, SearchStrategy] = None) – The search strategy. If None, the strategy is left blank.
task_name (Optional[str] = None) – The name of the tuning task.
logger (logging.Logger) – The logger for the tuning task.
rand_state (int = -1) – The random state. Must be an integer in [1, 2^31-1]; -1 means using a random seed.
num_threads (Union[int, Literal["physical", "logical"]] = "physical") – The number of threads to be used; "physical" or "logical" means the number of physical or logical CPU cores, respectively.
Methods:
generate_design_space() – Generate design spaces given a module.
pre_tuning(max_trials[, ...]) – A method to be called for SearchStrategy to do necessary preparation before tuning.
post_tuning() – A method to be called for SearchStrategy to do necessary cleanup after tuning.
generate_measure_candidates() – Generate a batch of measure candidates from design spaces for measurement.
notify_runner_results(measure_candidates, ...) – Update the state in SearchStrategy with profiling results.
clone() – Clone the TuneContext.
- generate_design_space() List[tvm.tir.schedule.schedule.Schedule] ¶
Generate design spaces given a module.
Delegated to self.space_generator.generate_design_space with self.mod
- Returns
design_spaces – The generated design spaces, i.e., schedules.
- Return type
List[tvm.tir.Schedule]
- pre_tuning(max_trials: int, num_trials_per_iter: int = 64, design_spaces: Optional[List[tvm.tir.schedule.schedule.Schedule]] = None, database: Optional[Database] = None, cost_model: Optional[CostModel] = None) None ¶
A method to be called for SearchStrategy to do necessary preparation before tuning.
Delegated to self.search_strategy.pre_tuning.
- Parameters
max_trials (int) – The maximum number of trials to be executed.
num_trials_per_iter (int = 64) – The number of trials to be executed per iteration.
design_spaces (Optional[List[tvm.tir.Schedule]]) – The design spaces used during tuning process. If None, use the outcome of self.generate_design_space().
database (Optional[Database] = None) – The database used during tuning process. If None, and the search strategy is EvolutionarySearch, then use tvm.meta_schedule.database.MemoryDatabase.
cost_model (Optional[CostModel] = None) – The cost model used during tuning process. If None, and the search strategy is EvolutionarySearch, then use tvm.meta_schedule.cost_model.RandomModel.
- post_tuning() None ¶
A method to be called for SearchStrategy to do necessary cleanup after tuning.
Delegated to self.search_strategy.post_tuning.
- generate_measure_candidates() Optional[List[MeasureCandidate]] ¶
Generate a batch of measure candidates from design spaces for measurement.
Delegated to self.search_strategy.generate_measure_candidates.
- Returns
measure_candidates – The measure candidates generated, None if search is finished.
- Return type
Optional[List[MeasureCandidate]]
- notify_runner_results(measure_candidates: List[MeasureCandidate], results: List[RunnerResult]) None ¶
Update the state in SearchStrategy with profiling results.
Delegated to self.search_strategy.notify_runner_results.
- Parameters
measure_candidates (List[MeasureCandidate]) – The measure candidates for update.
results (List[RunnerResult]) – The profiling results from the runner.
- clone() tvm.meta_schedule.tune_context.TuneContext ¶
Clone the TuneContext.
- Returns
cloned_context – The cloned TuneContext.
- Return type
TuneContext
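Example
A hedged sketch of a standalone TuneContext used to inspect the design space of a workload without launching any measurement; the workload and target are illustrative.
    import tvm
    from tvm import te, meta_schedule as ms

    A = te.placeholder((128, 128), name="A")
    B = te.compute((128, 128), lambda i, j: A[i, j] + 1.0, name="B")
    mod = tvm.IRModule({"main": te.create_prim_func([A, B])})

    context = ms.TuneContext(
        mod=mod,
        target="llvm --num-cores=4",
        space_generator="post-order-apply",
        search_strategy="evolutionary",
        task_name="inspect",
    )
    for i, space in enumerate(context.generate_design_space()):
        print(f"design space #{i}:")
        print(space.trace)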
- tvm.meta_schedule.derived_object(cls: type) type ¶
A decorator to register derived subclasses for TVM objects.
- Parameters
cls (type) – The derived class to be registered.
- Returns
cls – The decorated TVM object.
- Return type
type
Example
    @register_object("meta_schedule.PyRunner")
    class _PyRunner(meta_schedule.Runner):
        def __init__(self, f_run: Callable = None):
            self.__init_handle_by_constructor__(_ffi_api.RunnerPyRunner, f_run)

    class PyRunner:
        _tvm_metadata = {
            "cls": _PyRunner,
            "methods": ["run"],
        }

        def run(self, runner_inputs):
            raise NotImplementedError

    @derived_object
    class LocalRunner(PyRunner):
        def run(self, runner_inputs):
            ...