Merge tag 'v1.24.1' of https://github.com/micropython/micropython into update/micropython_v1.24.1-usermod

Patch release for mpremote, rp2 IGMP, esp32 PWM, SDCard, and AP channel

This is a patch release containing the following commits:
- tools/mpremote: fix UnboundLocalError in Transport.fs_writefile()
- esp32/machine_pwm: use IDF functions to calculate resolution correctly
- pic16bit: make it build with recent XC16 versions
- py/objdeque: fix buffer overflow in deque_subscr
- extmod/modlwip: fix IGMP address type when IPv6 is enabled
- esp32/machine_pwm: restore PWM support for ESP-IDF v5.0.x and v5.1.x
- esp32: workaround native code execution crash on ESP32-S2
- tools/mpremote: make sure stdout and stderr output appear in order
- tools/mpremote: add test for forced copy
- tools/mpremote: support trailing slash on dest for non-recursive copy
- esp32/modsocket: fix getaddrinfo hints to set AI_CANONNAME
- extmod/vfs_blockdev: support bool return from Python read/write blocks
- extmod/network_cyw43: fix isconnected() result on AP interface
- extmod/network_cyw43: fix uninitialised variable in status('stations')
- extmod/network_cyw43: allow configuring active AP interface
- esp32: fix setting WLAN channel in AP mode
- esp32: use hardware version for touchpad macro defines
- esp32: fix machine.TouchPad startup on ESP32-S2 and S3
- extmod/modframebuf: fix 0 radius bug in FrameBuffer.ellipse
- nrf/drivers/ticker: reset slow ticker callback count on soft reboot
- py/objfloat: workaround non-constant NAN definition on Windows MSVC
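As a hedged illustration of the mpremote copy behaviour referenced above (file and device paths here are placeholders, not taken from the release notes):

    mpremote cp main.py :lib/    # non-recursive copy; trailing slash on dest puts the file at :lib/main.py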

# Conflicts:
#	.github/workflows/code_formatting.yml
#	.github/workflows/code_size.yml
#	.github/workflows/commit_formatting.yml
#	.github/workflows/docs.yml
#	.github/workflows/examples.yml
#	.github/workflows/mpremote.yml
#	.github/workflows/mpy_format.yml
#	.github/workflows/ports.yml
#	.github/workflows/ports_cc3200.yml
#	.github/workflows/ports_esp32.yml
#	.github/workflows/ports_esp8266.yml
#	.github/workflows/ports_mimxrt.yml
#	.github/workflows/ports_nrf.yml
#	.github/workflows/ports_powerpc.yml
#	.github/workflows/ports_renesas-ra.yml
#	.github/workflows/ports_rp2.yml
#	.github/workflows/ports_samd.yml
#	.github/workflows/ports_stm32.yml
#	.github/workflows/ports_unix.yml
#	.github/workflows/ports_webassembly.yml
#	.github/workflows/ports_windows.yml
#	.github/workflows/ports_zephyr.yml
#	README.md
#	extmod/extmod.mk
#	ports/esp32/.gitignore
#	ports/esp32/CMakeLists.txt
#	ports/esp32/boards/ESP32_GENERIC/mpconfigboard.cmake
#	ports/esp32/boards/sdkconfig.base
#	ports/esp32/main.c
#	ports/esp32/main/CMakeLists.txt
#	ports/stm32/Makefile
#	ports/stm32/boards/STM32F7DISC/mpconfigboard.mk
#	ports/stm32/boards/manifest.py
#	ports/stm32/boards/stm32f7xx_hal_conf_base.h
#	ports/unix/Makefile
#	ports/unix/variants/manifest.py
#	ports/unix/variants/mpconfigvariant_common.h
#	ports/windows/mpconfigport.h
#	py/makeqstrdata.py
#	py/qstr.c
#	py/qstr.h
#	tools/ci.sh
#	tools/makemanifest.py
#	tools/mpy-tool.py
commit 06ebf74c14
Gabor Peresztegi, 2025-03-01 17:41:20 +01:00
3563 changed files with 125839 additions and 65946 deletions

@@ -1,3 +1,21 @@
+# all: Prune trailing whitespace.
+dda9b9c6da5d3c31fa8769e581a753e95a270803
+# all: Remove the "STATIC" macro and just use "static" instead.
+decf8e6a8bb940d5829ca3296790631fcece7b21
+# renesas-ra: Fix spelling mistakes found by codespell.
+b3f2f18f927fa2fad10daf63d8c391331f5edf58
+# all: Update Python formatting to ruff-format.
+bbd8760bd9a2302e5abee29db279102bb11d7732
+# all: Fix various spelling mistakes found by codespell 2.2.6.
+cf490a70917a1b2d38ba9b58e763e0837d0f7ca7
+# all: Fix spelling mistakes based on codespell check.
+b1229efbd1509654dec6053865ab828d769e29db
 # top: Update Python formatting to black "2023 stable style".
 8b2748269244304854b3462cb8902952b4dcb892

.gitattributes

@@ -8,10 +8,12 @@
 # These are binary so should never be modified by git.
 *.a binary
+*.ico binary
 *.png binary
 *.jpg binary
 *.dxf binary
 *.mpy binary
+*.der binary
 # These should also not be modified by git.
 tests/basics/string_cr_conversion.py -text

@@ -1,25 +0,0 @@
---
name: Bug report
about: Report an issue
title: ''
labels: bug
assignees: ''
---
* Please search existing issues before raising a new issue. For questions about MicroPython or for help using MicroPython, or any sort of "how do I?" requests, please use the Discussions tab or raise a documentation request instead.
* In your issue, please include a clear and concise description of what the bug is, the expected output, and how to replicate it.
* If this issue involves external hardware, please include links to relevant datasheets and schematics.
* If you are seeing code being executed incorrectly, please provide a minimal example and expected output (e.g. comparison to CPython).
* For build issues, please include full details of your environment, compiler versions, command lines, and build output.
* Please provide as much information as possible about the version of MicroPython you're running, such as:
- firmware file name
- git commit hash and port/board
- version information shown in the REPL (hit Ctrl-B to see the startup message)
* Remove all placeholder text above before submitting.

.github/ISSUE_TEMPLATE/bug_report.yml

@@ -0,0 +1,109 @@
name: Bug report
description: Report a bug or unexpected behaviour
labels: ["bug"]
body:
- type: markdown
attributes:
value: |
Please provide as much detail as you can, it really helps us find and fix bugs faster.
#### Not a bug report?
* If you have a question \"How Do I ...?\", please post it on [GitHub Discussions](https://github.com/orgs/micropython/discussions/) or [Discord](https://discord.gg/RB8HZSAExQ) instead of here.
* For missing or incorrect documentation, or feature requests, then please [choose a different issue type](https://github.com/micropython/micropython/issues/new/choose).
#### Existing issue?
* Please search for [existing issues](https://github.com/micropython/micropython/issues) matching this bug before reporting.
- type: input
id: port-board-hw
attributes:
label: Port, board and/or hardware
description: |
Which MicroPython port(s) and board(s) are you using?
placeholder: |
esp32 port, ESP32-Fantastic board.
validations:
required: true
- type: textarea
id: version
attributes:
label: MicroPython version
description: |
To find the version:
1. Open a serial REPL.
2. Type Ctrl-B to see the startup message.
3. Copy-paste that output here.
If the issue is about building MicroPython, please provide output of `git describe --dirty` and as much information as possible about the build environment.
If the version or configuration is modified from the official MicroPython releases or the master branch, please tell us the details of this as well.
placeholder: |
MicroPython v6.28.3 on 2029-01-23; PyBoard 9 with STM32F9
validations:
required: true
- type: textarea
id: steps-reproduce
attributes:
label: Reproduction
description: |
What steps will reproduce the problem? Please include all details that could be relevant about the environment, configuration, etc.
If there is Python code to reproduce this issue then please either:
a. Type it into a code block below ([code block guide](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks)), or
b. Post longer code to a [GitHub gist](https://gist.github.com/), or
c. Create a sample project on GitHub.
For build issues, please provide the exact build commands that you ran.
placeholder: |
1. Copy paste the code provided below into a new file
2. Use `mpremote run` to execute it on the board.
validations:
required: true
- type: textarea
id: expected
attributes:
label: Expected behaviour
description: |
What did you expect MicroPython to do? If comparing output with CPython or a different MicroPython port/version then please provide that output here.
placeholder: |
Expected to print "Hello World".
Here is the correct output, seen with previous MicroPython version v3.14.159:
> [...]
- type: textarea
id: what-happened
attributes:
label: Observed behaviour
description: |
What actually happened? Where possible please paste exact output, or the complete build log, etc. Very long output can be linked in a [GitHub gist](https://gist.github.com/).
placeholder: |
This unexpected exception appears:
> [...]
validations:
required: true
- type: textarea
id: additional
attributes:
label: Additional Information
description: |
Is there anything else that might help to resolve this issue?
value: No, I've provided everything above.
- type: dropdown
id: code-of-conduct
attributes:
label: Code of Conduct
description: |
Do you agree to follow the MicroPython [Code of Conduct](https://github.com/micropython/micropython/blob/master/CODEOFCONDUCT.md) to ensure a safe and respectful space for everyone?
options:
- "Yes, I agree"
multiple: true
validations:
required: true
- type: markdown
attributes:
value: |
Thanks for taking the time to help improve MicroPython.

@@ -1,16 +0,0 @@
---
name: Documentation issue
about: Report areas of the documentation or examples that need improvement
title: 'docs: '
labels: documentation
assignees: ''
---
* Please search existing issues before raising a new issue. For questions about MicroPython or for help using MicroPython, or any sort of "how do I?" requests, please use the Discussions tab instead.
* Describe what was missing from the documentation and/or what was incorrect/incomplete.
* If possible, please link to the relevant page on https://docs.micropython.org/
* Remove all placeholder text above before submitting.

@@ -0,0 +1,46 @@
name: Documentation issue
description: Report areas of the documentation or examples that need improvement
title: "docs: "
labels: ["documentation"]
body:
- type: markdown
attributes:
value: |
This form is for reporting issues with the documentation or examples provided with MicroPython.
If you have a general question \"How Do I ...?\", please post it on [GitHub Discussions](https://github.com/orgs/micropython/discussions/) or [Discord](https://discord.gg/RB8HZSAExQ) instead of here.
#### Existing issue?
* Please search for [existing issues](https://github.com/micropython/micropython/issues) before reporting a new one.
- type: input
id: page
attributes:
label: Documentation URL
description: |
Does this issue relate to a particular page in the [online documentation](https://docs.micropython.org/en/latest/)? If yes, please paste the URL of the page:
placeholder: |
https://docs.micropython.org/en/latest/
- type: textarea
id: version
attributes:
label: Description
description: |
Please describe what was missing from the documentation and/or what was incorrect/incomplete.
validations:
required: true
- type: dropdown
id: code-of-conduct
attributes:
label: Code of Conduct
description: |
Do you agree to follow the MicroPython [Code of Conduct](https://github.com/micropython/micropython/blob/master/CODEOFCONDUCT.md) to ensure a safe and respectful space for everyone?
options:
- "Yes, I agree"
multiple: true
validations:
required: true
- type: markdown
attributes:
value: |
Thanks for taking the time to help improve MicroPython.

@@ -1,24 +0,0 @@
---
name: Feature request
about: Request a feature or improvement
title: ''
labels: enhancement
assignees: ''
---
* Please search existing issues before raising a new issue. For questions about MicroPython or for help using MicroPython, or any sort of "how do I?" requests, please use the Discussions tab or raise a documentation request instead.
* Describe the feature you'd like to see added to MicroPython. In particular, what does this feature enable and why is it useful. MicroPython aims to strike a balance between functionality and code size, so please consider whether this feature can be optionally enabled and whether it can be provided in other ways (e.g. pure-Python library).
* For core Python features, where possible please include a link to the relevant PEP.
* For new architectures / ports / boards, please provide links to relevant documentation, specifications, and toolchains. Any information about the popularity and unique features about this hardware would also be useful.
* For features for existing ports (e.g. new peripherals or microcontroller features), please describe which port(s) it applies too, and whether this is could be an extension to the machine API or a port-specific module?
* For drivers (e.g. for external hardware), please link to datasheets and/or existing drivers from other sources.
* Who do you expect will implement the feature you are requesting? Would you be willing to sponsor this work?
* Remove all placeholder text above before submitting.

@@ -0,0 +1,74 @@
name: Feature request
description: Request a feature or improvement
labels: ['enhancement']
body:
- type: markdown
attributes:
value: |
This form is for requesting features or improvements in MicroPython.
#### Get feedback first
Before submitting a new feature idea here, suggest starting a discussion on [Discord](https://discord.gg/RB8HZSAExQ) or [GitHub Discussions](https://github.com/orgs/micropython/discussions/) to get early feedback from the community and maintainers.
#### Not a MicroPython core feature?
* If you have a question \"How Do I ...?\", please post it on GitHub Discussions or Discord instead of here.
* Could this feature be implemented as a pure Python library? If so, please open the request on the [micropython-lib repository](https://github.com/micropython/micropython-lib/issues) instead.
#### Existing issue?
* Please search for [existing issues](https://github.com/micropython/micropython/issues) before opening a new one.
- type: textarea
id: feature
attributes:
label: Description
description: |
Describe the feature you'd like to see added to MicroPython. What does this feature enable and why is it useful?
* For core Python features, where possible please include a link to the relevant PEP or CPython documentation.
* For new architectures / ports / boards, please provide links to relevant documentation, specifications, and toolchains. Any information about the popularity and unique features about this hardware would also be useful.
* For features for existing ports (e.g. new peripherals or microcontroller features), please describe which port(s) it applies to, and whether this could be an extension to the machine API or a port-specific module?
* For drivers (e.g. for external hardware), please link to datasheets and/or existing drivers from other sources.
If there is an existing discussion somewhere about this feature, please add a link to it as well.
validations:
required: true
- type: textarea
id: size
attributes:
label: Code Size
description: |
MicroPython aims to strike a balance between functionality and code size. Can this feature be optionally enabled?
If you believe the usefulness of this feature would outweigh the additional code size, please explain. (It's OK to say you're unsure here, we're happy to discuss this with you.)
- type: dropdown
id: implementation
attributes:
label: Implementation
description: |
What is your suggestion for implementing this feature?
(See also: [How to sponsor](https://github.com/sponsors/micropython#sponsors), [How to submit a Pull Request](https://github.com/micropython/micropython/wiki/ContributorGuidelines).)
options:
- I hope the MicroPython maintainers or community will implement this feature
- I intend to implement this feature and would submit a Pull Request if desirable
- I would like to sponsor development of this feature
multiple: true
validations:
required: true
- type: dropdown
id: code-of-conduct
attributes:
label: Code of Conduct
description: |
Do you agree to follow the MicroPython [Code of Conduct](https://github.com/micropython/micropython/blob/master/CODEOFCONDUCT.md) to ensure a safe and respectful space for everyone?
options:
- "Yes, I agree"
multiple: true
validations:
required: true
- type: markdown
attributes:
value: |
Thanks for taking the time to suggest improvements for MicroPython.

@@ -1,16 +0,0 @@
---
name: Security report
about: Report a security issue or vunerability in MicroPython
title: ''
labels: security
assignees: ''
---
* If you need to raise this issue privately with the MicroPython team, please email contact@micropython.org instead.
* Include a clear and concise description of what the security issue is.
* What does this issue allow an attacker to do?
* Remove all placeholder text above before submitting.

.github/ISSUE_TEMPLATE/security.yml

@@ -0,0 +1,60 @@
name: Security report
description: Report a security issue or vulnerability in MicroPython
labels: ["security"]
body:
- type: markdown
attributes:
value: |
This form is for reporting security issues in MicroPython that are not readily exploitable.
1. For issues that are readily exploitable or have high impact, please email contact@micropython.org instead.
1. If this is a question about security, please ask it in [Discussions](https://github.com/orgs/micropython/discussions/) or [Discord](https://discord.gg/RB8HZSAExQ) instead.
#### Existing issue?
* Please search for [existing issues](https://github.com/micropython/micropython/issues) before reporting a new one.
- type: input
id: port-board-hw
attributes:
label: Port, board and/or hardware
description: |
Which MicroPython port(s) and board(s) are you using?
placeholder: |
esp32 port, ESP32-Duper board.
- type: textarea
id: version
attributes:
label: MicroPython version
description: |
To find the version:
1. Open a serial REPL.
2. Type Ctrl-B to see the startup message.
3. Copy-paste that output here.
If the version or configuration is modified from the official MicroPython releases or the master branch, please tell us the details of this as well.
placeholder: |
MicroPython v6.28.3 on 2029-01-23; PyBoard 9 with STM32F9
- type: textarea
id: report
attributes:
label: Issue Report
description: |
Please provide a clear and concise description of the security issue.
* What does this issue allow an attacker to do?
* How does the attacker exploit this issue?
validations:
required: true
- type: dropdown
id: code-of-conduct
attributes:
label: Code of Conduct
description: |
Do you agree to follow the MicroPython [Code of Conduct](https://github.com/micropython/micropython/blob/master/CODEOFCONDUCT.md) to ensure a safe and respectful space for everyone?
options:
- "Yes, I agree"
multiple: true
validations:
required: true

.github/pull_request_template.md

@@ -0,0 +1,33 @@
<!-- Thanks for submitting a Pull Request! We appreciate you spending the
time to improve MicroPython. Please provide enough information so that
others can review your Pull Request.
Before submitting, please read:
https://github.com/micropython/micropython/blob/master/CODEOFCONDUCT.md
https://github.com/micropython/micropython/wiki/ContributorGuidelines
Please check any CI failures that appear after your Pull Request is opened.
-->
### Summary
<!-- Explain the reason for making this change. What problem does the pull request
solve, or what improvement does it add? Add links if relevant. -->
### Testing
<!-- Explain what testing you did, and on which boards/ports. If there are
boards or ports that you couldn't test, please mention this here as well.
If you leave this empty then your Pull Request may be closed. -->
### Trade-offs and Alternatives
<!-- If the Pull Request has some negative impact (i.e. increased code size)
then please explain why you think the trade-off improvement is worth it.
If you can think of alternative ways to do this, please explain that here too.
Delete this heading if not relevant (i.e. small fixes) -->

.github/workflows/biome.yml

@@ -0,0 +1,16 @@
name: JavaScript code lint and formatting with Biome
on: [push, pull_request]
jobs:
eslint:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Biome
uses: biomejs/setup-biome@v2
with:
version: 1.5.3
- name: Run Biome
run: biome ci --indent-style=space --indent-width=4 tests/ ports/webassembly

.github/workflows/code_formatting.yml

@@ -0,0 +1,20 @@
name: Check code formatting
on: [push, pull_request]
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
code-formatting:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
- name: Install packages
run: source tools/ci.sh && ci_c_code_formatting_setup
- name: Run code formatting
run: source tools/ci.sh && ci_c_code_formatting_run
- name: Check code formatting
run: git diff --exit-code

.github/workflows/code_size.yml

@@ -0,0 +1,51 @@
name: Check code size
on:
push:
pull_request:
paths:
- '.github/workflows/*.yml'
- 'tools/**'
- 'py/**'
- 'extmod/**'
- 'shared/**'
- 'lib/**'
- 'ports/bare-arm/**'
- 'ports/mimxrt/**'
- 'ports/minimal/**'
- 'ports/rp2/**'
- 'ports/samd/**'
- 'ports/stm32/**'
- 'ports/unix/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 100
- name: Install packages
run: source tools/ci.sh && ci_code_size_setup
- name: Build
run: source tools/ci.sh && ci_code_size_build
- name: Compute code size difference
run: tools/metrics.py diff ~/size0 ~/size1 | tee diff
- name: Save PR number
if: github.event_name == 'pull_request'
env:
PR_NUMBER: ${{ github.event.number }}
run: echo $PR_NUMBER > pr_number
- name: Upload diff
if: github.event_name == 'pull_request'
uses: actions/upload-artifact@v4
with:
name: code-size-report
path: |
diff
pr_number
retention-days: 1

@@ -11,11 +11,11 @@ concurrency:
 jobs:
 comment:
-runs-on: ubuntu-20.04
+runs-on: ubuntu-22.04
 steps:
 - name: 'Download artifact'
 id: download-artifact
-uses: actions/github-script@v6
+uses: actions/github-script@v7
 with:
 result-encoding: string
 script: |
@@ -56,7 +56,7 @@ jobs:
 run: unzip code-size-report.zip
 - name: Post comment to pull request
 if: steps.download-artifact.outputs.result == 'ok'
-uses: actions/github-script@v6
+uses: actions/github-script@v7
 with:
 github-token: ${{secrets.GITHUB_TOKEN}}
 script: |

.github/workflows/codespell.yml

@@ -0,0 +1,13 @@
name: Check spelling with codespell
on: [push, pull_request]
jobs:
codespell:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
# codespell version should be kept in sync with .pre-commit-config.yml
- run: pip install --user codespell==2.2.6 tomli
- run: codespell

.github/workflows/commit_formatting.yml

@@ -0,0 +1,18 @@
name: Check commit message formatting
on: [push, pull_request]
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: '100'
- uses: actions/setup-python@v5
- name: Check commit message formatting
run: source tools/ci.sh && ci_commit_formatting_run

.github/workflows/docs.yml

@@ -0,0 +1,23 @@
name: Build docs
on:
push:
pull_request:
paths:
- docs/**
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
- name: Install Python packages
run: pip install -r docs/requirements.txt
- name: Build docs
run: make -C docs/ html

.github/workflows/examples.yml

@@ -0,0 +1,25 @@
name: Check examples
on:
push:
pull_request:
paths:
- '.github/workflows/*.yml'
- 'examples/**'
- 'ports/unix/**'
- 'py/**'
- 'shared/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
embedding:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Build
run: make -C examples/embedding -f micropython_embed.mk && make -C examples/embedding
- name: Run
run: ./examples/embedding/embed | grep "hello world"

.github/workflows/mpremote.yml

@@ -0,0 +1,29 @@
name: Package mpremote
on:
push:
pull_request:
paths:
- '.github/workflows/*.yml'
- 'tools/**'
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
# Setting this to zero means fetch all history and tags,
# which hatch-vcs can use to discover the version tag.
fetch-depth: 0
- uses: actions/setup-python@v5
- name: Install build tools
run: pip install build
- name: Build mpremote wheel
run: cd tools/mpremote && python -m build --wheel
- name: Archive mpremote wheel
uses: actions/upload-artifact@v4
with:
name: mpremote
path: |
tools/mpremote/dist/mpremote*.whl

.github/workflows/mpy_format.yml

@@ -0,0 +1,24 @@
name: .mpy file format and tools
on:
push:
pull_request:
paths:
- '.github/workflows/*.yml'
- 'examples/**'
- 'tests/**'
- 'tools/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-20.04 # use 20.04 to get python2
steps:
- uses: actions/checkout@v4
- name: Install packages
run: source tools/ci.sh && ci_mpy_format_setup
- name: Test mpy-tool.py
run: source tools/ci.sh && ci_mpy_format_test

.github/workflows/ports.yml

@@ -0,0 +1,22 @@
name: Build ports metadata
on:
push:
pull_request:
paths:
- '.github/workflows/*.yml'
- 'tools/**'
- ports/**
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Build ports download metadata
run: mkdir boards && ./tools/autobuild/build-downloads.py . ./boards

.github/workflows/ports_cc3200.yml

@@ -0,0 +1,28 @@
name: cc3200 port
on:
push:
pull_request:
paths:
- '.github/workflows/*.yml'
- 'tools/**'
- 'py/**'
- 'extmod/**'
- 'shared/**'
- 'lib/**'
- 'drivers/**'
- 'ports/cc3200/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install packages
run: source tools/ci.sh && ci_cc3200_setup
- name: Build
run: source tools/ci.sh && ci_cc3200_build

@@ -18,24 +18,40 @@ concurrency:
 cancel-in-progress: true
 jobs:
-build_idf402:
+build_idf:
+strategy:
+fail-fast: false
+matrix:
+ci_func: # names are functions in ci.sh
+- esp32_build_cmod_spiram_s2
+- esp32_build_s3_c3
 runs-on: ubuntu-20.04
 steps:
-- uses: actions/checkout@v3
-- name: Initialize lv_bindings submodule
-run: git submodule update --init --recursive lib/lv_bindings
-- name: Install packages
-run: source tools/ci.sh && ci_esp32_idf402_setup
-- name: Build
-run: source tools/ci.sh && ci_esp32_build
-build_idf44:
-runs-on: ubuntu-20.04
-steps:
-- uses: actions/checkout@v3
-- name: Initialize lv_bindings submodule
-run: git submodule update --init --recursive lib/lv_bindings
-- name: Install packages
-run: source tools/ci.sh && ci_esp32_idf44_setup
-- name: Build
-run: source tools/ci.sh && ci_esp32_build
+- uses: actions/checkout@v4
+- id: idf_ver
+name: Read the ESP-IDF version
+run: source tools/ci.sh && echo "IDF_VER=$IDF_VER" | tee "$GITHUB_OUTPUT"
+- name: Cached ESP-IDF install
+id: cache_esp_idf
+uses: actions/cache@v4
+with:
+path: |
+./esp-idf/
+~/.espressif/
+!~/.espressif/dist/
+~/.cache/pip/
+key: esp-idf-${{ steps.idf_ver.outputs.IDF_VER }}
+- name: Install ESP-IDF packages
+if: steps.cache_esp_idf.outputs.cache-hit != 'true'
+run: source tools/ci.sh && ci_esp32_idf_setup
+- name: ccache
+uses: hendrikmuhs/ccache-action@v1.2
+with:
+key: esp32-${{ matrix.ci_func }}
+- name: Build ci_${{matrix.ci_func }}
+run: source tools/ci.sh && ci_${{ matrix.ci_func }}

.github/workflows/ports_esp8266.yml

@@ -0,0 +1,28 @@
name: esp8266 port
on:
push:
pull_request:
paths:
- '.github/workflows/*.yml'
- 'tools/**'
- 'py/**'
- 'extmod/**'
- 'shared/**'
- 'lib/**'
- 'drivers/**'
- 'ports/esp8266/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install packages
run: source tools/ci.sh && ci_esp8266_setup && ci_esp8266_path >> $GITHUB_PATH
- name: Build
run: source tools/ci.sh && ci_esp8266_build

.github/workflows/ports_mimxrt.yml

@@ -0,0 +1,33 @@
name: mimxrt port
on:
push:
pull_request:
paths:
- '.github/workflows/*.yml'
- 'tools/**'
- 'py/**'
- 'extmod/**'
- 'shared/**'
- 'lib/**'
- 'drivers/**'
- 'ports/mimxrt/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-20.04
defaults:
run:
working-directory: 'micropython repo' # test build with space in path
steps:
- uses: actions/checkout@v4
with:
path: 'micropython repo'
- name: Install packages
run: source tools/ci.sh && ci_mimxrt_setup
- name: Build
run: source tools/ci.sh && ci_mimxrt_build

.github/workflows/ports_nrf.yml

@@ -0,0 +1,28 @@
name: nrf port
on:
push:
pull_request:
paths:
- '.github/workflows/*.yml'
- 'tools/**'
- 'py/**'
- 'extmod/**'
- 'shared/**'
- 'lib/**'
- 'drivers/**'
- 'ports/nrf/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v4
- name: Install packages
run: source tools/ci.sh && ci_nrf_setup
- name: Build
run: source tools/ci.sh && ci_nrf_build

.github/workflows/ports_powerpc.yml

@@ -0,0 +1,28 @@
name: powerpc port
on:
push:
pull_request:
paths:
- '.github/workflows/*.yml'
- 'tools/**'
- 'py/**'
- 'extmod/**'
- 'shared/**'
- 'lib/**'
- 'drivers/**'
- 'ports/powerpc/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install packages
run: source tools/ci.sh && ci_powerpc_setup
- name: Build
run: source tools/ci.sh && ci_powerpc_build

.github/workflows/ports_qemu.yml

@@ -0,0 +1,44 @@
name: qemu port
on:
push:
pull_request:
paths:
- '.github/workflows/*.yml'
- 'tools/**'
- 'py/**'
- 'extmod/**'
- 'shared/**'
- 'lib/**'
- 'drivers/**'
- 'ports/qemu/**'
- 'tests/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build_and_test_arm:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install packages
run: source tools/ci.sh && ci_qemu_setup_arm
- name: Build and run test suite
run: source tools/ci.sh && ci_qemu_build_arm
- name: Print failures
if: failure()
run: tests/run-tests.py --print-failures
build_and_test_rv32:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install packages
run: source tools/ci.sh && ci_qemu_setup_rv32
- name: Build and run test suite
run: source tools/ci.sh && ci_qemu_build_rv32
- name: Print failures
if: failure()
run: tests/run-tests.py --print-failures

.github/workflows/ports_renesas-ra.yml

@@ -0,0 +1,29 @@
name: renesas-ra port
on:
push:
pull_request:
paths:
- '.github/workflows/*.yml'
- 'tools/**'
- 'py/**'
- 'extmod/**'
- 'shared/**'
- 'lib/**'
- 'drivers/**'
- 'ports/renesas-ra/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build_renesas_ra_board:
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v4
- name: Install packages
run: source tools/ci.sh && ci_renesas_ra_setup
- name: Build
run: source tools/ci.sh && ci_renesas_ra_board_build

.github/workflows/ports_rp2.yml

@@ -0,0 +1,33 @@
name: rp2 port
on:
push:
pull_request:
paths:
- '.github/workflows/*.yml'
- 'tools/**'
- 'py/**'
- 'extmod/**'
- 'shared/**'
- 'lib/**'
- 'drivers/**'
- 'ports/rp2/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
defaults:
run:
working-directory: 'micropython repo' # test build with space in path
steps:
- uses: actions/checkout@v4
with:
path: 'micropython repo'
- name: Install packages
run: source tools/ci.sh && ci_rp2_setup
- name: Build
run: source tools/ci.sh && ci_rp2_build

.github/workflows/ports_samd.yml

@@ -0,0 +1,28 @@
name: samd port
on:
push:
pull_request:
paths:
- '.github/workflows/*.yml'
- 'tools/**'
- 'py/**'
- 'extmod/**'
- 'shared/**'
- 'lib/**'
- 'drivers/**'
- 'ports/samd/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install packages
run: source tools/ci.sh && ci_samd_setup
- name: Build
run: source tools/ci.sh && ci_samd_build

.github/workflows/ports_stm32.yml

@@ -0,0 +1,36 @@
name: stm32 port
on:
push:
pull_request:
paths:
- '.github/workflows/*.yml'
- 'tools/**'
- 'py/**'
- 'extmod/**'
- 'shared/**'
- 'lib/**'
- 'drivers/**'
- 'ports/stm32/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build_stm32:
strategy:
fail-fast: false
matrix:
ci_func: # names are functions in ci.sh
- stm32_pyb_build
- stm32_nucleo_build
- stm32_misc_build
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v4
- name: Install packages
run: source tools/ci.sh && ci_stm32_setup
- name: Build ci_${{matrix.ci_func }}
run: source tools/ci.sh && ci_${{ matrix.ci_func }}

.github/workflows/ports_unix.yml

@@ -0,0 +1,251 @@
name: unix port
on:
push:
pull_request:
paths:
- '.github/workflows/*.yml'
- 'tools/**'
- 'py/**'
- 'extmod/**'
- 'shared/**'
- 'lib/**'
- 'examples/**'
- 'mpy-cross/**'
- 'ports/unix/**'
- 'tests/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
minimal:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Build
run: source tools/ci.sh && ci_unix_minimal_build
- name: Run main test suite
run: source tools/ci.sh && ci_unix_minimal_run_tests
- name: Print failures
if: failure()
run: tests/run-tests.py --print-failures
reproducible:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Build with reproducible date
run: source tools/ci.sh && ci_unix_minimal_build
env:
SOURCE_DATE_EPOCH: 1234567890
- name: Check reproducible build date
run: echo | ports/unix/build-minimal/micropython -i | grep 'on 2009-02-13;'
standard:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Build
run: source tools/ci.sh && ci_unix_standard_build
- name: Run main test suite
run: source tools/ci.sh && ci_unix_standard_run_tests
- name: Print failures
if: failure()
run: tests/run-tests.py --print-failures
standard_v2:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Build
run: source tools/ci.sh && ci_unix_standard_v2_build
- name: Run main test suite
run: source tools/ci.sh && ci_unix_standard_v2_run_tests
- name: Print failures
if: failure()
run: tests/run-tests.py --print-failures
coverage:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install packages
run: source tools/ci.sh && ci_unix_coverage_setup
- name: Build
run: source tools/ci.sh && ci_unix_coverage_build
- name: Run main test suite
run: source tools/ci.sh && ci_unix_coverage_run_tests
- name: Test merging .mpy files
run: source tools/ci.sh && ci_unix_coverage_run_mpy_merge_tests
- name: Build native mpy modules
run: source tools/ci.sh && ci_native_mpy_modules_build
- name: Test importing .mpy generated by mpy_ld.py
run: source tools/ci.sh && ci_unix_coverage_run_native_mpy_tests
- name: Run gcov coverage analysis
run: |
(cd ports/unix && gcov -o build-coverage/py ../../py/*.c || true)
(cd ports/unix && gcov -o build-coverage/extmod ../../extmod/*.c || true)
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v4
with:
fail_ci_if_error: true
verbose: true
token: ${{ secrets.CODECOV_TOKEN }}
- name: Print failures
if: failure()
run: tests/run-tests.py --print-failures
coverage_32bit:
runs-on: ubuntu-20.04 # use 20.04 to get libffi-dev:i386
steps:
- uses: actions/checkout@v4
- name: Install packages
run: source tools/ci.sh && ci_unix_32bit_setup
- name: Build
run: source tools/ci.sh && ci_unix_coverage_32bit_build
- name: Run main test suite
run: source tools/ci.sh && ci_unix_coverage_32bit_run_tests
- name: Build native mpy modules
run: source tools/ci.sh && ci_native_mpy_modules_32bit_build
- name: Test importing .mpy generated by mpy_ld.py
run: source tools/ci.sh && ci_unix_coverage_32bit_run_native_mpy_tests
- name: Print failures
if: failure()
run: tests/run-tests.py --print-failures
nanbox:
runs-on: ubuntu-20.04 # use 20.04 to get python2, and libffi-dev:i386
steps:
- uses: actions/checkout@v4
- name: Install packages
run: source tools/ci.sh && ci_unix_32bit_setup
- name: Build
run: source tools/ci.sh && ci_unix_nanbox_build
- name: Run main test suite
run: source tools/ci.sh && ci_unix_nanbox_run_tests
- name: Print failures
if: failure()
run: tests/run-tests.py --print-failures
float:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Build
run: source tools/ci.sh && ci_unix_float_build
- name: Run main test suite
run: source tools/ci.sh && ci_unix_float_run_tests
- name: Print failures
if: failure()
run: tests/run-tests.py --print-failures
stackless_clang:
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v4
- name: Install packages
run: source tools/ci.sh && ci_unix_clang_setup
- name: Build
run: source tools/ci.sh && ci_unix_stackless_clang_build
- name: Run main test suite
run: source tools/ci.sh && ci_unix_stackless_clang_run_tests
- name: Print failures
if: failure()
run: tests/run-tests.py --print-failures
float_clang:
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v4
- name: Install packages
run: source tools/ci.sh && ci_unix_clang_setup
- name: Build
run: source tools/ci.sh && ci_unix_float_clang_build
- name: Run main test suite
run: source tools/ci.sh && ci_unix_float_clang_run_tests
- name: Print failures
if: failure()
run: tests/run-tests.py --print-failures
settrace:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Build
run: source tools/ci.sh && ci_unix_settrace_build
- name: Run main test suite
run: source tools/ci.sh && ci_unix_settrace_run_tests
- name: Print failures
if: failure()
run: tests/run-tests.py --print-failures
settrace_stackless:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Build
run: source tools/ci.sh && ci_unix_settrace_stackless_build
- name: Run main test suite
run: source tools/ci.sh && ci_unix_settrace_stackless_run_tests
- name: Print failures
if: failure()
run: tests/run-tests.py --print-failures
macos:
runs-on: macos-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.8'
- name: Build
run: source tools/ci.sh && ci_unix_macos_build
- name: Run tests
run: source tools/ci.sh && ci_unix_macos_run_tests
- name: Print failures
if: failure()
run: tests/run-tests.py --print-failures
qemu_mips:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install packages
run: source tools/ci.sh && ci_unix_qemu_mips_setup
- name: Build
run: source tools/ci.sh && ci_unix_qemu_mips_build
- name: Run main test suite
run: source tools/ci.sh && ci_unix_qemu_mips_run_tests
- name: Print failures
if: failure()
run: tests/run-tests.py --print-failures
qemu_arm:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install packages
run: source tools/ci.sh && ci_unix_qemu_arm_setup
- name: Build
run: source tools/ci.sh && ci_unix_qemu_arm_build
- name: Run main test suite
run: source tools/ci.sh && ci_unix_qemu_arm_run_tests
- name: Print failures
if: failure()
run: tests/run-tests.py --print-failures
qemu_riscv64:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install packages
run: source tools/ci.sh && ci_unix_qemu_riscv64_setup
- name: Build
run: source tools/ci.sh && ci_unix_qemu_riscv64_build
- name: Run main test suite
run: source tools/ci.sh && ci_unix_qemu_riscv64_run_tests
- name: Print failures
if: failure()
run: tests/run-tests.py --print-failures

.github/workflows/ports_webassembly.yml

@@ -0,0 +1,32 @@
name: webassembly port
on:
push:
pull_request:
paths:
- '.github/workflows/*.yml'
- 'tools/**'
- 'py/**'
- 'extmod/**'
- 'shared/**'
- 'lib/**'
- 'ports/webassembly/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install packages
run: source tools/ci.sh && ci_webassembly_setup
- name: Build
run: source tools/ci.sh && ci_webassembly_build
- name: Run tests
run: source tools/ci.sh && ci_webassembly_run_tests
- name: Print failures
if: failure()
run: tests/run-tests.py --print-failures

.github/workflows/ports_windows.yml

@@ -0,0 +1,146 @@
name: windows port
on:
push:
pull_request:
paths:
- '.github/workflows/*.yml'
- 'tools/**'
- 'py/**'
- 'extmod/**'
- 'shared/**'
- 'lib/**'
- 'ports/unix/**'
- 'ports/windows/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build-vs:
strategy:
fail-fast: false
matrix:
platform: [x86, x64]
configuration: [Debug, Release]
variant: [dev, standard]
visualstudio: ['2017', '2019', '2022']
include:
- visualstudio: '2017'
runner: windows-latest
vs_version: '[15, 16)'
- visualstudio: '2019'
runner: windows-2019
vs_version: '[16, 17)'
- visualstudio: '2022'
runner: windows-2022
vs_version: '[17, 18)'
# trim down the number of jobs in the matrix
exclude:
- variant: standard
configuration: Debug
- visualstudio: '2019'
configuration: Debug
env:
CI_BUILD_CONFIGURATION: ${{ matrix.configuration }}
runs-on: ${{ matrix.runner }}
steps:
- name: Install Visual Studio 2017
if: matrix.visualstudio == '2017'
run: |
choco install visualstudio2017buildtools
choco install visualstudio2017-workload-vctools
choco install windows-sdk-8.1
- uses: microsoft/setup-msbuild@v2
with:
vs-version: ${{ matrix.vs_version }}
- uses: actions/setup-python@v5
if: matrix.runner == 'windows-2019'
with:
python-version: '3.9'
- uses: actions/checkout@v4
- name: Build mpy-cross.exe
run: msbuild mpy-cross\mpy-cross.vcxproj -maxcpucount -property:Configuration=${{ matrix.configuration }} -property:Platform=${{ matrix.platform }}
- name: Update submodules
run: git submodule update --init lib/micropython-lib
- name: Build micropython.exe
run: msbuild ports\windows\micropython.vcxproj -maxcpucount -property:Configuration=${{ matrix.configuration }} -property:Platform=${{ matrix.platform }} -property:PyVariant=${{ matrix.variant }}
- name: Get micropython.exe path
id: get_path
run: |
$exePath="$(msbuild ports\windows\micropython.vcxproj -nologo -v:m -t:ShowTargetPath -property:Configuration=${{ matrix.configuration }} -property:Platform=${{ matrix.platform }} -property:PyVariant=${{ matrix.variant }})"
echo ("micropython=" + $exePath.Trim()) >> $env:GITHUB_OUTPUT
- name: Run tests
id: test
env:
MICROPY_MICROPYTHON: ${{ steps.get_path.outputs.micropython }}
working-directory: tests
run: python run-tests.py
- name: Print failures
if: failure() && steps.test.conclusion == 'failure'
working-directory: tests
run: python run-tests.py --print-failures
- name: Run mpy tests
id: test_mpy
env:
MICROPY_MICROPYTHON: ${{ steps.get_path.outputs.micropython }}
working-directory: tests
run: python run-tests.py --via-mpy -d basics float micropython
- name: Print mpy failures
if: failure() && steps.test_mpy.conclusion == 'failure'
working-directory: tests
run: python run-tests.py --print-failures
build-mingw:
strategy:
fail-fast: false
matrix:
variant: [dev, standard]
sys: [mingw32, mingw64]
include:
- sys: mingw32
env: i686
- sys: mingw64
env: x86_64
runs-on: windows-2022
env:
CHERE_INVOKING: enabled_from_arguments
defaults:
run:
shell: msys2 {0}
steps:
- uses: msys2/setup-msys2@v2
with:
msystem: ${{ matrix.sys }}
update: true
install: >-
make
mingw-w64-${{ matrix.env }}-gcc
pkg-config
mingw-w64-${{ matrix.env }}-python3
git
diffutils
- uses: actions/checkout@v4
- name: Build mpy-cross.exe
run: make -C mpy-cross -j2
- name: Update submodules
run: make -C ports/windows VARIANT=${{ matrix.variant }} submodules
- name: Build micropython.exe
run: make -C ports/windows -j2 VARIANT=${{ matrix.variant }}
- name: Run tests
id: test
run: make -C ports/windows test_full VARIANT=${{ matrix.variant }}
- name: Print failures
if: failure() && steps.test.conclusion == 'failure'
working-directory: tests
run: python run-tests.py --print-failures
cross-build-on-linux:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install packages
run: source tools/ci.sh && ci_windows_setup
- name: Build
run: source tools/ci.sh && ci_windows_build

.github/workflows/ports_zephyr.yml

@@ -0,0 +1,43 @@
name: zephyr port
on:
push:
pull_request:
paths:
- '.github/workflows/ports_zephyr.yml'
- 'tools/**'
- 'py/**'
- 'extmod/**'
- 'shared/**'
- 'lib/**'
- 'ports/zephyr/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: jlumbroso/free-disk-space@main
with:
# Only free up a few things so this step runs quickly.
android: false
dotnet: true
haskell: true
large-packages: false
docker-images: false
swap-storage: false
- uses: actions/checkout@v4
- name: Install packages
run: source tools/ci.sh && ci_zephyr_setup
- name: Install Zephyr
run: source tools/ci.sh && ci_zephyr_install
- name: Build
run: source tools/ci.sh && ci_zephyr_build
- name: Run main test suite
run: source tools/ci.sh && ci_zephyr_run_tests
- name: Print failures
if: failure()
run: tests/run-tests.py --print-failures

.github/workflows/ruff.yml

@@ -0,0 +1,13 @@
name: Python code lint and formatting with ruff
on: [push, pull_request]
jobs:
ruff:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
# ruff version should be kept in sync with .pre-commit-config.yaml
- run: pip install --user ruff==0.1.3
- run: ruff check --output-format=github .
- run: ruff format --diff .

.gitignore

@@ -53,8 +53,9 @@ build/
 build-*/
 docs/genrst/
-# Test failure outputs
+# Test failure outputs and intermediate artefacts
 tests/results/*
+tests/ports/unix/ffi_lib.so
 # Python cache files
 __pycache__/

.gitmodules

@@ -3,13 +3,13 @@
 url = https://github.com/micropython/axtls.git
 [submodule "lib/libffi"]
 path = lib/libffi
-url = https://github.com/atgreen/libffi
+url = https://github.com/libffi/libffi
 [submodule "lib/lwip"]
 path = lib/lwip
 url = https://github.com/lwip-tcpip/lwip.git
 [submodule "lib/berkeley-db-1.xx"]
 path = lib/berkeley-db-1.xx
-url = https://github.com/pfalcon/berkeley-db-1.xx
+url = https://github.com/micropython/berkeley-db-1.xx
 [submodule "lib/stm32lib"]
 path = lib/stm32lib
 url = https://github.com/micropython/stm32lib
@@ -59,3 +59,15 @@
 [submodule "lib/micropython-lib"]
 path = lib/micropython-lib
 url = https://github.com/micropython/micropython-lib.git
+[submodule "lib/protobuf-c"]
+path = lib/protobuf-c
+url = https://github.com/protobuf-c/protobuf-c.git
+[submodule "lib/open-amp"]
+path = lib/open-amp
+url = https://github.com/OpenAMP/open-amp.git
+[submodule "lib/libmetal"]
+path = lib/libmetal
+url = https://github.com/OpenAMP/libmetal.git
+[submodule "lib/arduino-lib"]
+path = lib/arduino-lib
+url = https://github.com/arduino/arduino-lib-mpy.git

@@ -2,8 +2,8 @@ repos:
 - repo: local
 hooks:
 - id: codeformat
-name: MicroPython codeformat.py for changed files
-entry: tools/codeformat.py -v -f
+name: MicroPython codeformat.py for changed C files
+entry: tools/codeformat.py -v -c -f
 language: python
 - id: verifygitlog
 name: MicroPython git commit message format checker
@@ -11,3 +11,17 @@ repos:
 language: python
 verbose: true
 stages: [commit-msg]
+- repo: https://github.com/charliermarsh/ruff-pre-commit
+# Version should be kept in sync with .github/workflows/ruff.yml
+rev: v0.1.3
+hooks:
+- id: ruff
+- id: ruff-format
+- repo: https://github.com/codespell-project/codespell
+# Version should be kept in sync with .github/workflows/codespell.yml
+rev: v2.2.6
+hooks:
+- id: codespell
+name: Spellcheck for changed files (codespell)
+additional_dependencies:
+- tomli

@@ -11,7 +11,7 @@ It's also ok to drop file extensions.
 Besides prefix, first line of a commit message should describe a
 change clearly and to the point, and be a grammatical sentence with
-final full stop. First line should fit within 72 characters. Examples
+final full stop. First line must fit within 72 characters. Examples
 of good first line of commit messages:
 py/objstr: Add splitlines() method.
@@ -27,12 +27,9 @@ change beyond 5 lines would likely require such detailed description.
 To get good practical examples of good commits and their messages, browse
 the `git log` of the project.
-When committing you are encouraged to sign-off your commit by adding
-"Signed-off-by" lines and similar, eg using "git commit -s". If you don't
-explicitly sign-off in this way then the commit message, which includes your
-name and email address in the "Author" line, implies your sign-off. In either
-case, of explicit or implicit sign-off, you are certifying and signing off
-against the following:
+When committing you must sign-off your commit by adding "Signed-off-by:"
+line(s) at the end of the commit message, e.g. using `git commit -s`. You
+are then certifying and signing off against the following:
 * That you wrote the change yourself, or took it from a project with
 a compatible license (in the latter case the commit message, and possibly
@@ -49,21 +46,23 @@ against the following:
 * Your contribution including commit message will be publicly and
 indefinitely available for anyone to access, including redistribution
 under the terms of the project's license.
-* Your signature for all of the above, which is the "Signed-off-by" line
-or the "Author" line in the commit message, includes your full real name and
-a valid and active email address by which you can be contacted in the
-foreseeable future.
+* Your signature for all of the above, which is the "Signed-off-by" line,
+includes your full real name and a valid and active email address by
+which you can be contacted in the foreseeable future.
 Code auto-formatting
 ====================
-Both C and Python code are auto-formatted using the `tools/codeformat.py`
-script. This uses [uncrustify](https://github.com/uncrustify/uncrustify) to
-format C code and [black](https://github.com/psf/black) to format Python code.
-After making changes, and before committing, run this tool to reformat your
-changes to the correct style. Without arguments this tool will reformat all
-source code (and may take some time to run). Otherwise pass as arguments to
-the tool the files that changed and it will only reformat those.
+Both C and Python code formatting are controlled for consistency across the
+MicroPython codebase. C code is formatted using the `tools/codeformat.py`
+script which uses [uncrustify](https://github.com/uncrustify/uncrustify).
+Python code is linted and formatted using
+[ruff & ruff format](https://github.com/astral-sh/ruff).
+After making changes, and before committing, run `tools/codeformat.py` to
+reformat your C code and `ruff format` for any Python code. Without
+arguments this tool will reformat all source code (and may take some time
+to run). Otherwise pass as arguments to the tool the files that changed,
+and it will only reformat those.
 uncrustify
 ==========
@@ -105,6 +104,22 @@ This command may work, please raise a new Issue if it doesn't:
 curl -L https://github.com/Homebrew/homebrew-core/raw/2b07d8192623365078a8b855a164ebcdf81494a6/Formula/uncrustify.rb > uncrustify.rb && brew install uncrustify.rb && rm uncrustify.rb
 ```
+Code spell checking
+===================
+Code spell checking is done using [codespell](https://github.com/codespell-project/codespell#codespell)
+and runs in a GitHub action in CI. Codespell is configured via `pyproject.toml`
+to avoid false positives. It is recommended run codespell before submitting a
+PR. To simplify this, codespell is configured as a pre-commit hook and will be
+installed if you run `pre-commit install` (see below).
+If you want to install and run codespell manually, you can do so by running:
+```
+$ pip install codespell tomli
+$ codespell
+```
 Automatic Pre-Commit Hooks
 ==========================
@@ -155,12 +170,22 @@ Tips:
 * To ignore the pre-commit message format check temporarily, start the commit
 message subject line with "WIP" (for "Work In Progress").
+Running pre-commit manually
+===========================
+Once pre-commit is installed as per the previous section it can be manually
+run against the MicroPython python codebase to update file formatting on
+demand, with either:
+* `pre-commit run --all-files` to fix all files in the MicroPython codebase
+* `pre-commit run --file ./path/to/my/file` to fix just one file
+* `pre-commit run --file ./path/to/my/folder/*` to fix just one folder
 Python code conventions
 =======================
 Python code follows [PEP 8](https://legacy.python.org/dev/peps/pep-0008/) and
-is auto-formatted using [black](https://github.com/psf/black) with a line-length
-of 99 characters.
+is auto-formatted using [ruff format](https://docs.astral.sh/ruff/formatter)
+with a line-length of 99 characters.
 Naming conventions:
 - Module names are short and all lowercase; eg pyb, stm.
@@ -255,7 +280,7 @@ Documentation conventions
 =========================
 MicroPython generally follows CPython in documentation process and
-conventions. reStructuredText syntax is used for the documention.
+conventions. reStructuredText syntax is used for the documentation.
 Specific conventions/suggestions:

LICENSE

@@ -1,6 +1,6 @@
 The MIT License (MIT)
-Copyright (c) 2013-2023 Damien P. George
+Copyright (c) 2013-2024 Damien P. George
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal
@@ -36,7 +36,6 @@ used during the build process and is not part of the compiled source code.
 / (MIT)
 /drivers
 /cc3100 (BSD-3-clause)
-/wiznet5k (BSD-3-clause)
 /lib
 /asf4 (Apache-2.0)
 /axtls (BSD-3-clause)
@@ -49,24 +48,31 @@ used during the build process and is not part of the compiled source code.
 /cmsis (BSD-3-clause)
 /crypto-algorithms (NONE)
 /libhydrogen (ISC)
+/libmetal (BSD-3-clause)
 /littlefs (BSD-3-clause)
 /lwip (BSD-3-clause)
 /mynewt-nimble (Apache-2.0)
 /nrfx (BSD-3-clause)
 /nxp_driver (BSD-3-Clause)
 /oofatfs (BSD-1-clause)
+/open-amp (BSD-3-clause)
 /pico-sdk (BSD-3-clause)
 /re1.5 (BSD-3-clause)
 /stm32lib (BSD-3-clause)
-/tinytest (BSD-3-clause)
 /tinyusb (MIT)
 /uzlib (Zlib)
+/wiznet5k (MIT)
 /logo (uses OFL-1.1)
 /ports
 /cc3200
 /hal (BSD-3-clause)
 /simplelink (BSD-3-clause)
 /FreeRTOS (GPL-2.0 with FreeRTOS exception)
+/esp32
+/ppp_set_auth.* (Apache-2.0)
+/rp2
+/mutex_extra.c (BSD-3-clause)
+/clocks_extra.c (BSD-3-clause)
 /stm32
 /usbd*.c (MCD-ST Liberty SW License Agreement V2)
 /stm32_it.* (MIT + BSD-3-clause)
@@ -76,8 +82,6 @@ used during the build process and is not part of the compiled source code.
 /*/stm32*.h (BSD-3-clause)
 /usbdev (MCD-ST Liberty SW License Agreement V2)
 /usbhost (MCD-ST Liberty SW License Agreement V2)
-/teensy
-/core (PJRC.COM)
 /zephyr
 /src (Apache-2.0)
 /tools
README.md
@ -1,200 +1,4 @@
# Micropython + lvgl [![Unix CI badge](https://github.com/micropython/micropython/actions/workflows/ports_unix.yml/badge.svg)](https://github.com/micropython/micropython/actions?query=branch%3Amaster+event%3Apush) [![STM32 CI badge](https://github.com/micropython/micropython/actions/workflows/ports_stm32.yml/badge.svg)](https://github.com/micropython/micropython/actions?query=branch%3Amaster+event%3Apush) [![Docs CI badge](https://github.com/micropython/micropython/actions/workflows/docs.yml/badge.svg)](https://docs.micropython.org/) [![codecov](https://codecov.io/gh/micropython/micropython/branch/master/graph/badge.svg?token=I92PfD05sD)](https://codecov.io/gh/micropython/micropython)
**Micropython bindings to LVGL for Embedded devices, Unix and JavaScript**
[![Build lv_micropython unix port](https://github.com/lvgl/lv_micropython/actions/workflows/unix_port.yml/badge.svg)](https://github.com/lvgl/lv_micropython/actions/workflows/unix_port.yml)
[![Build lv_micropython stm32 port](https://github.com/lvgl/lv_micropython/actions/workflows/stm32_port.yml/badge.svg)](https://github.com/lvgl/lv_micropython/actions/workflows/stm32_port.yml)
[![esp32 port](https://github.com/lvgl/lv_micropython/actions/workflows/ports_esp32.yml/badge.svg)](https://github.com/lvgl/lv_micropython/actions/workflows/ports_esp32.yml) [![Build lv_micropython rp2 port](https://github.com/lvgl/lv_micropython/actions/workflows/rp2_port.yml/badge.svg)](https://github.com/lvgl/lv_micropython/actions/workflows/rp2_port.yml)
[![Open in Gitpod](https://gitpod.io/button/open-in-gitpod.svg)](https://gitpod.io/#https://github.com/lvgl/lv_micropython)
To quickly run Micropython + LVGL from your web browser you can also use the [Online Simulator](https://sim.lvgl.io/).
**For information about Micropython lvgl bindings please refer to [lv_binding_micropython/README.md](https://github.com/lvgl/lv_binding_micropython/blob/master/README.md)**
See also [Micropython + LittlevGL](https://blog.lvgl.io/2019-02-20/micropython-bindings) blog post. (LittlevGL is LVGL's previous name.)
For questions and discussions - please use the forum: https://forum.lvgl.io/c/micropython
Original micropython README: https://github.com/micropython/micropython/blob/master/README.md
## Relationship between `lv_micropython` and `lv_binding_micropython`
Originally, `lv_micropython` was created as an example of how to use [lv_binding_micropython](https://github.com/lvgl/lv_binding_micropython) on a Micropython fork.
As such, we try to keep changes here as minimal as possible and to keep it in sync with Micropython upstream releases. We also try to add changes to `lv_binding_micropython` instead of `lv_micropython` when possible (for example, we keep all drivers, the ESP32 CMake functionality, etc. in `lv_binding_micropython`).
Eventually it turned out that many people prefer using `lv_micropython` directly and only a few use it as a reference to support LVGL on their own Micropython fork.
If you are only starting with Micropython+LVGL, it's recommended that you use `lv_micropython`, while porting a Micropython fork to LVGL is for advanced users.
## Build Instructions
First step is always to clone lv_micropython and update its submodules recursively:
```
git clone https://github.com/lvgl/lv_micropython.git
cd lv_micropython
git submodule update --init --recursive lib/lv_bindings
```
Next you should build mpy-cross
```
make -C mpy-cross
```
Port specific steps usually include updating the port's submodules with `make submodules` and running make for the port itself.
### Unix (Linux) port
1. `sudo apt-get install build-essential libreadline-dev libffi-dev git pkg-config libsdl2-2.0-0 libsdl2-dev python3.8 parallel`
Python 3 is required, but you can install some other version of python3 instead of 3.8, if needed.
2. `git clone https://github.com/lvgl/lv_micropython.git`
3. `cd lv_micropython`
4. `git submodule update --init --recursive lib/lv_bindings`
5. `make -C mpy-cross`
6. `make -C ports/unix submodules`
7. `make -C ports/unix`
8. `./ports/unix/micropython`
## Unix (MAC OS) port
1. `brew install sdl2 pkg-config`
2. `git clone https://github.com/lvgl/lv_micropython.git`
3. `cd lv_micropython`
4. `git submodule update --init --recursive lib/lv_bindings`
5. `sudo mkdir -p /usr/local/lib/`
6. `sudo cp /opt/homebrew/Cellar/sdl2/2.24.0/lib/libSDL2.dylib /usr/local/lib/`
7. `sudo cp -r /opt/homebrew/Cellar/sdl2/2.24.0/include /usr/local/`
8. `sed -i '' 's/ -Werror//' ports/unix/Makefile mpy-cross/Makefile` Remove -Werror from compiler parameters as Mac fails compilation otherwise
9. `make -C mpy-cross`
10. `make -C ports/unix submodules`
11. `make -C ports/unix`
12. `./ports/unix/build-standard/micropython`
### ESP32 port
Please run `esp-idf/export.sh` from your ESP-IDF installation directory as explained in the [Micropython ESP32 Getting Started documentation](https://docs.espressif.com/projects/esp-idf/en/stable/esp32/get-started/#get-started-export)
The ESP-IDF version needs to match the version expected by Micropython, otherwise a warning will be displayed (and the build will probably fail)
For more details refer to [Setting up the toolchain and ESP-IDF](https://github.com/lvgl/lv_micropython/blob/master/ports/esp32/README.md#setting-up-the-toolchain-and-esp-idf)
When using the ILI9341 driver, the color depth needs to be set to match the ILI9341. This can be done from the command line.
Here is the command to build ESP32 + LVGL which is compatible with ILI9341 driver:
```
make -C mpy-cross
make -C ports/esp32 LV_CFLAGS="-DLV_COLOR_DEPTH=16" BOARD=GENERIC_SPIRAM deploy
```
Explanation of the parameters:
- `LV_CFLAGS` are used to override the color depth, for ILI9341 compatibility.
- `LV_COLOR_DEPTH=16` is needed if you plan to use the ILI9341 driver.
- `BOARD` - I use a WROVER board with SPIRAM. You can choose other boards from the `ports/esp32/boards/` directory.
- `deploy` - the make command will create the ESP32 port of Micropython and will try to deploy it through the USB-UART bridge.
For more details please refer to [Micropython ESP32 README](https://github.com/micropython/micropython/blob/master/ports/esp32/README.md).
### JavaScript port
Refer to the README of the `lvgl_javascript` branch: https://github.com/lvgl/lv_micropython/tree/lvgl_javascript_v8#javascript-port
### Raspberry Pi Pico port
This port uses [Micropython infrastructure for C modules](https://docs.micropython.org/en/latest/develop/cmodules.html#compiling-the-cmodule-into-micropython) and `USER_C_MODULES` must be given:
1. `git clone https://github.com/lvgl/lv_micropython.git`
2. `cd lv_micropython`
3. `git submodule update --init --recursive lib/lv_bindings`
4. `make -C ports/rp2 BOARD=PICO submodules`
5. `make -j -C mpy-cross`
6. `make -j -C ports/rp2 BOARD=PICO USER_C_MODULES=../../lib/lv_bindings/bindings.cmake`
#### Troubleshooting
If you experience unstable behaviour, it is worth checking the value of *MICROPY_HW_FLASH_STORAGE_BASE* against the value of *__flash_binary_end* from the firmware.elf.map file.
If the storage base is lower than the binary end, parts of the firmware will be overwritten when the micropython filesystem is initialised.
## Super Simple Example
First, LVGL needs to be imported and initialized
```python
import lvgl as lv
lv.init()
```
Then an event loop, a display driver and an input driver need to be registered.
Refer to [Porting the library](https://docs.lvgl.io/8.0/porting/index.html) for more information.
Here is an example of registering SDL drivers on Micropython unix port:
```python
# Create an event loop and Register SDL display/mouse/keyboard drivers.
from lv_utils import event_loop
WIDTH = 480
HEIGHT = 320
event_loop = event_loop()
disp_drv = lv.sdl_window_create(WIDTH, HEIGHT)
mouse = lv.sdl_mouse_create()
keyboard = lv.sdl_keyboard_create()
group = lv.group_create()  # create an input group for keyboard navigation (assumes the lv_group_create binding)
keyboard.set_group(group)
```
Here is an alternative example, for registering ILI9341 drivers on Micropython ESP32 port:
```python
import lvgl as lv
# Import ILI9341 driver and initialize it
from ili9341 import ili9341
disp = ili9341()
# Import XPT2046 driver and initialize it
from xpt2046 import xpt2046
touch = xpt2046()
```
By default, both ILI9341 and XPT2046 are initialized on the same SPI bus with the following parameters:
- ILI9341: `miso=5, mosi=18, clk=19, cs=13, dc=12, rst=4, power=14, backlight=15, spihost=esp.HSPI_HOST, mhz=40, factor=4, hybrid=True`
- XPT2046: `cs=25, spihost=esp.HSPI_HOST, mhz=5, max_cmds=16, cal_x0 = 3783, cal_y0 = 3948, cal_x1 = 242, cal_y1 = 423, transpose = True, samples = 3`
You can change any of these parameters on ili9341/xpt2046 constructor.
You can also initialize them on different SPI buses if you want, by providing miso/mosi/clk parameters. Set them to -1 to use the existing (initialized) spihost bus.
Now you can create the GUI itself:
```python
# Create a screen with a button and a label
scr = lv.obj()
btn = lv.btn(scr)
btn.align_to(lv.scr_act(), lv.ALIGN.CENTER, 0, 0)
label = lv.label(btn)
label.set_text("Hello World!")
# Load the screen
lv.scr_load(scr)
```
## More information
More info about LVGL:
- Website https://lvgl.io
- GitHub: https://github.com/lvgl/lvgl
- Documentation: https://docs.lvgl.io/master/get-started/bindings/micropython.html
- Examples: https://docs.lvgl.io/master/examples.html
- More examples: https://github.com/lvgl/lv_binding_micropython/tree/master/examples
More info about lvgl Micropython bindings:
- https://github.com/lvgl/lv_binding_micropython/blob/master/README.md
Discussions about the Micropython binding: https://github.com/lvgl/lvgl/issues/557
More info about the unix port: https://github.com/micropython/micropython/wiki/Getting-Started#debian-ubuntu-mint-and-variants
The MicroPython project The MicroPython project
======================= =======================
@ -215,7 +19,7 @@ Python 3.5 and some select features from later versions). The following core
datatypes are provided: `str`(including basic Unicode support), `bytes`, datatypes are provided: `str`(including basic Unicode support), `bytes`,
`bytearray`, `tuple`, `list`, `dict`, `set`, `frozenset`, `array.array`, `bytearray`, `tuple`, `list`, `dict`, `set`, `frozenset`, `array.array`,
`collections.namedtuple`, classes and instances. Builtin modules include `collections.namedtuple`, classes and instances. Builtin modules include
`os`, `sys`, `time`, `re`, and `struct`, etc. Select ports have support for `os`, `sys`, `time`, `re`, and `struct`, etc. Some ports have support for
`_thread` module (multithreading), `socket` and `ssl` for networking, and `_thread` module (multithreading), `socket` and `ssl` for networking, and
`asyncio`. Note that only a subset of Python 3 functionality is implemented `asyncio`. Note that only a subset of Python 3 functionality is implemented
for the data types and modules. for the data types and modules.
@ -231,8 +35,8 @@ DAC, PWM, SPI, I2C, CAN, Bluetooth, and USB.
Getting started Getting started
--------------- ---------------
See the [online documentation](https://docs.micropython.org/) for API See the [online documentation](https://docs.micropython.org/) for the API
references and information about using MicroPython and information about how reference and information about using MicroPython and information about how
it is implemented. it is implemented.
We use [GitHub Discussions](https://github.com/micropython/micropython/discussions) We use [GitHub Discussions](https://github.com/micropython/micropython/discussions)
@ -304,18 +108,17 @@ track of the code size of the core runtime and VM.
In addition, the following ports are provided in this repository: In addition, the following ports are provided in this repository:
- [cc3200](ports/cc3200) -- Texas Instruments CC3200 (including PyCom WiPy). - [cc3200](ports/cc3200) -- Texas Instruments CC3200 (including PyCom WiPy).
- [esp32](ports/esp32) -- Espressif ESP32 SoC (including ESP32S2, ESP32S3, ESP32C3). - [esp32](ports/esp32) -- Espressif ESP32 SoC (including ESP32S2, ESP32S3, ESP32C3, ESP32C6).
- [esp8266](ports/esp8266) -- Espressif ESP8266 SoC. - [esp8266](ports/esp8266) -- Espressif ESP8266 SoC.
- [mimxrt](ports/mimxrt) -- NXP m.iMX RT (including Teensy 4.x). - [mimxrt](ports/mimxrt) -- NXP m.iMX RT (including Teensy 4.x).
- [nrf](ports/nrf) -- Nordic Semiconductor nRF51 and nRF52. - [nrf](ports/nrf) -- Nordic Semiconductor nRF51 and nRF52.
- [pic16bit](ports/pic16bit) -- Microchip PIC 16-bit. - [pic16bit](ports/pic16bit) -- Microchip PIC 16-bit.
- [powerpc](ports/powerpc) -- IBM PowerPC (including Microwatt) - [powerpc](ports/powerpc) -- IBM PowerPC (including Microwatt)
- [qemu-arm](ports/qemu-arm) -- QEMU-based emulated target, for testing) - [qemu](ports/qemu) -- QEMU-based emulated target (for testing)
- [renesas-ra](ports/renesas-ra) -- Renesas RA family. - [renesas-ra](ports/renesas-ra) -- Renesas RA family.
- [rp2](ports/rp2) -- Raspberry Pi RP2040 (including Pico and Pico W). - [rp2](ports/rp2) -- Raspberry Pi RP2040 (including Pico and Pico W).
- [samd](ports/samd) -- Microchip (formerly Atmel) SAMD21 and SAMD51. - [samd](ports/samd) -- Microchip (formerly Atmel) SAMD21 and SAMD51.
- [stm32](ports/stm32) -- STMicroelectronics STM32 family (including F0, F4, F7, G0, G4, H7, L0, L4, WB) - [stm32](ports/stm32) -- STMicroelectronics STM32 family (including F0, F4, F7, G0, G4, H7, L0, L4, WB)
- [teensy](ports/teensy) -- Teensy 3.x.
- [webassembly](ports/webassembly) -- Emscripten port targeting browsers and NodeJS. - [webassembly](ports/webassembly) -- Emscripten port targeting browsers and NodeJS.
- [zephyr](ports/zephyr) -- Zephyr RTOS. - [zephyr](ports/zephyr) -- Zephyr RTOS.
@ -19,55 +19,56 @@ import os
# If extensions (or modules to document with autodoc) are in another directory, # If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the # add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here. # documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('.')) sys.path.insert(0, os.path.abspath("."))
# The MICROPY_VERSION env var should be "vX.Y.Z" (or unset).
micropy_version = os.getenv("MICROPY_VERSION") or "latest"
micropy_all_versions = (os.getenv("MICROPY_ALL_VERSIONS") or "latest").split(",")
url_pattern = "%s/en/%%s" % (os.getenv("MICROPY_URL_PREFIX") or "/",)
# The members of the html_context dict are available inside topindex.html # The members of the html_context dict are available inside topindex.html
micropy_version = os.getenv('MICROPY_VERSION') or 'latest'
micropy_all_versions = (os.getenv('MICROPY_ALL_VERSIONS') or 'latest').split(',')
url_pattern = '%s/en/%%s' % (os.getenv('MICROPY_URL_PREFIX') or '/',)
html_context = { html_context = {
'cur_version':micropy_version, "cur_version": micropy_version,
'all_versions':[ "all_versions": [(ver, url_pattern % ver) for ver in micropy_all_versions],
(ver, url_pattern % ver) for ver in micropy_all_versions "downloads": [
("PDF", url_pattern % micropy_version + "/micropython-docs.pdf"),
], ],
'downloads':[ "is_release": micropy_version != "latest",
('PDF', url_pattern % micropy_version + '/micropython-docs.pdf'),
],
'is_release': micropy_version != 'latest',
} }
# -- General configuration ------------------------------------------------ # -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here. # If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0' # needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be # Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones. # ones.
extensions = [ extensions = [
'sphinx.ext.autodoc', "sphinx.ext.autodoc",
'sphinx.ext.doctest', "sphinx.ext.doctest",
'sphinx.ext.intersphinx', "sphinx.ext.intersphinx",
'sphinx.ext.todo', "sphinx.ext.todo",
'sphinx.ext.coverage', "sphinx.ext.coverage",
"sphinxcontrib.jquery",
] ]
# Add any paths that contain templates here, relative to this directory. # Add any paths that contain templates here, relative to this directory.
templates_path = ['templates'] templates_path = ["templates"]
# The suffix of source filenames. # The suffix of source filenames.
source_suffix = '.rst' source_suffix = ".rst"
# The encoding of source files. # The encoding of source files.
#source_encoding = 'utf-8-sig' # source_encoding = 'utf-8-sig'
# The master toctree document. # The master toctree document.
master_doc = 'index' master_doc = "index"
# General information about the project. # General information about the project.
project = 'MicroPython' project = "MicroPython"
copyright = '- The MicroPython Documentation is Copyright © 2014-2023, Damien P. George, Paul Sokolovsky, and contributors' copyright = "- The MicroPython Documentation is Copyright © 2014-2024, Damien P. George, Paul Sokolovsky, and contributors"
# The version info for the project you're documenting, acts as replacement for # The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the # |version| and |release|, also used in various other places throughout the
@ -79,41 +80,41 @@ version = release = micropy_version
# The language for content autogenerated by Sphinx. Refer to documentation # The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages. # for a list of supported languages.
#language = None # language = None
# There are two options for replacing |today|: either, you set today to some # There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used: # non-false value, then it is used:
#today = '' # today = ''
# Else, today_fmt is used as the format for a strftime call. # Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y' # today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and # List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files. # directories to ignore when looking for source files.
exclude_patterns = ['build', '.venv'] exclude_patterns = ["build", ".venv"]
# The reST default role (used for this markup: `text`) to use for all # The reST default role (used for this markup: `text`) to use for all
# documents. # documents.
default_role = 'any' default_role = "any"
# If true, '()' will be appended to :func: etc. cross-reference text. # If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True # add_function_parentheses = True
# If true, the current module name will be prepended to all description # If true, the current module name will be prepended to all description
# unit titles (such as .. function::). # unit titles (such as .. function::).
#add_module_names = True # add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the # If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default. # output. They are ignored by default.
#show_authors = False # show_authors = False
# The name of the Pygments (syntax highlighting) style to use. # The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx' pygments_style = "sphinx"
# A list of ignored prefixes for module index sorting. # A list of ignored prefixes for module index sorting.
#modindex_common_prefix = [] # modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents. # If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False # keep_warnings = False
# Global include files. Sphinx docs suggest using rst_epilog in preference # Global include files. Sphinx docs suggest using rst_epilog in preference
# of rst_prolog, so we follow. Absolute paths below mean "from the base # of rst_prolog, so we follow. Absolute paths below mean "from the base
@ -125,144 +126,148 @@ rst_epilog = """
# -- Options for HTML output ---------------------------------------------- # -- Options for HTML output ----------------------------------------------
# on_rtd is whether we are on readthedocs.org # on_rtd is whether we are on readthedocs.org
on_rtd = os.environ.get('READTHEDOCS', None) == 'True' on_rtd = os.environ.get("READTHEDOCS", None) == "True"
if not on_rtd: # only import and set the theme if we're building docs locally if not on_rtd: # only import and set the theme if we're building docs locally
try: try:
import sphinx_rtd_theme import sphinx_rtd_theme
html_theme = 'sphinx_rtd_theme'
html_theme_path = [sphinx_rtd_theme.get_html_theme_path(), '.'] html_theme = "sphinx_rtd_theme"
html_theme_path = [sphinx_rtd_theme.get_html_theme_path(), "."]
except: except:
html_theme = 'default' html_theme = "default"
html_theme_path = ['.'] html_theme_path = ["."]
else: else:
html_theme_path = ['.'] html_theme_path = ["."]
# Theme options are theme-specific and customize the look and feel of a theme # Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the # further. For a list of options available for each theme, see the
# documentation. # documentation.
#html_theme_options = {} # html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory. # Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = ['.'] # html_theme_path = ['.']
# The name for this set of Sphinx documents. If None, it defaults to # The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation". # "<project> v<release> documentation".
#html_title = None # html_title = None
# A shorter title for the navigation bar. Default is the same as html_title. # A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None # html_short_title = None
# The name of an image file (relative to this directory) to place at the top # The name of an image file (relative to this directory) to place at the top
# of the sidebar. # of the sidebar.
#html_logo = '../../logo/trans-logo.png' # html_logo = '../../logo/trans-logo.png'
# The name of an image file (within the static path) to use as favicon of the # The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large. # pixels large.
html_favicon = 'static/favicon.ico' html_favicon = "static/favicon.ico"
# Add any paths that contain custom static files (such as style sheets) here, # Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files, # relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css". # so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['static'] html_static_path = ["static"]
# Add a custom CSS file for HTML generation # Add a custom CSS file for HTML generation
html_css_files = [ html_css_files = [
'custom.css', "custom.css",
] ]
# Add any extra paths that contain custom files (such as robots.txt or # Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied # .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation. # directly to the root of the documentation.
#html_extra_path = [] # html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format. # using the given strftime format.
html_last_updated_fmt = '%d %b %Y' html_last_updated_fmt = "%d %b %Y"
# If true, SmartyPants will be used to convert quotes and dashes to # If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities. # typographically correct entities.
#html_use_smartypants = True # html_use_smartypants = True
# Custom sidebar templates, maps document names to template names. # Custom sidebar templates, maps document names to template names.
#html_sidebars = {} # html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to # Additional templates that should be rendered to pages, maps page names to
# template names. # template names.
html_additional_pages = {"index": "topindex.html"} html_additional_pages = {"index": "topindex.html"}
# If false, no module index is generated. # If false, no module index is generated.
#html_domain_indices = True # html_domain_indices = True
# If false, no index is generated. # If false, no index is generated.
#html_use_index = True # html_use_index = True
# If true, the index is split into individual pages for each letter. # If true, the index is split into individual pages for each letter.
#html_split_index = False # html_split_index = False
# If true, links to the reST sources are added to the pages. # If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True # html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True. # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True # html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True # html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will # If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the # contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served. # base URL from which the finished HTML is served.
#html_use_opensearch = '' # html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml"). # This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None # html_file_suffix = None
# Output file base name for HTML help builder. # Output file base name for HTML help builder.
htmlhelp_basename = 'MicroPythondoc' htmlhelp_basename = "MicroPythondoc"
# -- Options for LaTeX output --------------------------------------------- # -- Options for LaTeX output ---------------------------------------------
latex_elements = { latex_elements = {
# The paper size ('letterpaper' or 'a4paper'). # The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper', #'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt',
#'pointsize': '10pt', # Additional stuff for the LaTeX preamble.
#'preamble': '',
# Additional stuff for the LaTeX preamble. # Include 3 levels of headers in PDF ToC
#'preamble': '', "preamble": r"\setcounter{tocdepth}{2}",
# Include 3 levels of headers in PDF ToC
'preamble': '\setcounter{tocdepth}{2}',
} }
# Grouping the document tree into LaTeX files. List of tuples # Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, # (source start file, target name, title,
# author, documentclass [howto, manual, or own class]). # author, documentclass [howto, manual, or own class]).
latex_documents = [ latex_documents = [
(master_doc, 'MicroPython.tex', 'MicroPython Documentation', (
'Damien P. George, Paul Sokolovsky, and contributors', 'manual'), master_doc,
"MicroPython.tex",
"MicroPython Documentation",
"Damien P. George, Paul Sokolovsky, and contributors",
"manual",
),
] ]
# The name of an image file (relative to this directory) to place at the top of # The name of an image file (relative to this directory) to place at the top of
# the title page. # the title page.
#latex_logo = None # latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts, # For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters. # not chapters.
#latex_use_parts = False # latex_use_parts = False
# If true, show page references after internal links. # If true, show page references after internal links.
#latex_show_pagerefs = False # latex_show_pagerefs = False
# If true, show URL addresses after external links. # If true, show URL addresses after external links.
#latex_show_urls = False # latex_show_urls = False
# Documents to append as an appendix to all manuals. # Documents to append as an appendix to all manuals.
#latex_appendices = [] # latex_appendices = []
# If false, no module index is generated. # If false, no module index is generated.
#latex_domain_indices = True # latex_domain_indices = True
# Enable better Unicode support so that `make latexpdf` doesn't fail # Enable better Unicode support so that `make latexpdf` doesn't fail
latex_engine = "xelatex" latex_engine = "xelatex"
@ -272,12 +277,17 @@ latex_engine = "xelatex"
# One entry per manual page. List of tuples # One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section). # (source start file, name, description, authors, manual section).
man_pages = [ man_pages = [
('index', 'micropython', 'MicroPython Documentation', (
['Damien P. George, Paul Sokolovsky, and contributors'], 1), "index",
"micropython",
"MicroPython Documentation",
["Damien P. George, Paul Sokolovsky, and contributors"],
1,
),
] ]
# If true, show URL addresses after external links. # If true, show URL addresses after external links.
#man_show_urls = False # man_show_urls = False
# -- Options for Texinfo output ------------------------------------------- # -- Options for Texinfo output -------------------------------------------
@ -286,23 +296,29 @@ man_pages = [
# (source start file, target name, title, author, # (source start file, target name, title, author,
# dir menu entry, description, category) # dir menu entry, description, category)
texinfo_documents = [ texinfo_documents = [
(master_doc, 'MicroPython', 'MicroPython Documentation', (
'Damien P. George, Paul Sokolovsky, and contributors', 'MicroPython', 'One line description of project.', master_doc,
'Miscellaneous'), "MicroPython",
"MicroPython Documentation",
"Damien P. George, Paul Sokolovsky, and contributors",
"MicroPython",
"One line description of project.",
"Miscellaneous",
),
] ]
# Documents to append as an appendix to all manuals. # Documents to append as an appendix to all manuals.
#texinfo_appendices = [] # texinfo_appendices = []
# If false, no module index is generated. # If false, no module index is generated.
#texinfo_domain_indices = True # texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'. # How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote' # texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu. # If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False # texinfo_no_detailmenu = False
# Example configuration for intersphinx: refer to the Python standard library. # Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {'python': ('https://docs.python.org/3.5', None)} intersphinx_mapping = {"python": ("https://docs.python.org/3.5", None)}
@ -53,13 +53,13 @@ A MicroPython user C module is a directory with the following files:
``SRC_USERMOD_C`` or ``SRC_USERMOD_LIB_C`` variables. The former will be ``SRC_USERMOD_C`` or ``SRC_USERMOD_LIB_C`` variables. The former will be
processed for ``MP_QSTR_`` and ``MP_REGISTER_MODULE`` definitions, the latter processed for ``MP_QSTR_`` and ``MP_REGISTER_MODULE`` definitions, the latter
will not (e.g. helpers and library code that isn't MicroPython-specific). will not (e.g. helpers and library code that isn't MicroPython-specific).
These paths should include your expaned copy of ``$(USERMOD_DIR)``, e.g.:: These paths should include your expanded copy of ``$(USERMOD_DIR)``, e.g.::
SRC_USERMOD_C += $(EXAMPLE_MOD_DIR)/modexample.c SRC_USERMOD_C += $(EXAMPLE_MOD_DIR)/modexample.c
SRC_USERMOD_LIB_C += $(EXAMPLE_MOD_DIR)/utils/algorithm.c SRC_USERMOD_LIB_C += $(EXAMPLE_MOD_DIR)/utils/algorithm.c
Similarly, use ``SRC_USERMOD_CXX`` and ``SRC_USERMOD_LIB_CXX`` for C++ Similarly, use ``SRC_USERMOD_CXX`` and ``SRC_USERMOD_LIB_CXX`` for C++
source files. source files. If you want to include assembly files use ``SRC_USERMOD_LIB_ASM``.
If you have custom compiler options (like ``-I`` to add directories to search If you have custom compiler options (like ``-I`` to add directories to search
for header files), these should be added to ``CFLAGS_USERMOD`` for C code for header files), these should be added to ``CFLAGS_USERMOD`` for C code
@ -98,7 +98,7 @@ Then also edit ``py/lexer.c`` to add the new keyword literal text:
.. code-block:: c .. code-block:: c
:emphasize-lines: 12 :emphasize-lines: 12
STATIC const char *const tok_kw[] = { static const char *const tok_kw[] = {
... ...
"or", "or",
"pass", "pass",
@ -157,7 +157,7 @@ The most relevant method you should know about is this:
mp_compile_to_raw_code(parse_tree, source_file, is_repl, &cm); mp_compile_to_raw_code(parse_tree, source_file, is_repl, &cm);
// Create and return a function object that executes the outer module. // Create and return a function object that executes the outer module.
return mp_make_function_from_raw_code(cm.rc, cm.context, NULL); return mp_make_function_from_proto_fun(cm.rc, cm.context, NULL);
} }
The compiler compiles the code in four passes: scope, stack size, code size and emit. The compiler compiles the code in four passes: scope, stack size, code size and emit.
@ -301,7 +301,7 @@ code statement:
.. code-block:: c .. code-block:: c
STATIC void emit_native_unary_op(emit_t *emit, mp_unary_op_t op) { static void emit_native_unary_op(emit_t *emit, mp_unary_op_t op) {
vtype_kind_t vtype; vtype_kind_t vtype;
emit_pre_pop_reg(emit, &vtype, REG_ARG_2); emit_pre_pop_reg(emit, &vtype, REG_ARG_2);
if (vtype == VTYPE_PYOBJ) { if (vtype == VTYPE_PYOBJ) {
@ -100,7 +100,7 @@ For the stm32 port, the ARM cross-compiler is required:
.. code-block:: bash .. code-block:: bash
$ sudo apt-get install arm-none-eabi-gcc arm-none-eabi-binutils arm-none-eabi-newlib $ sudo apt-get install gcc-arm-none-eabi libnewlib-arm-none-eabi
See the `ARM GCC See the `ARM GCC
toolchain <https://developer.arm.com/downloads/-/arm-gnu-toolchain-downloads>`_ toolchain <https://developer.arm.com/downloads/-/arm-gnu-toolchain-downloads>`_
@ -228,7 +228,7 @@ You can also specify which board to use:
.. code-block:: bash .. code-block:: bash
$ cd ports/stm32 $ cd ports/stm32
$ make submodules $ make BOARD=<board> submodules
$ make BOARD=<board> $ make BOARD=<board>
See `ports/stm32/boards <https://github.com/micropython/micropython/tree/master/ports/stm32/boards>`_ See `ports/stm32/boards <https://github.com/micropython/micropython/tree/master/ports/stm32/boards>`_
@ -245,7 +245,7 @@ that you use a virtual environment:
$ python3 -m venv env $ python3 -m venv env
$ source env/bin/activate $ source env/bin/activate
$ pip install sphinx $ pip install -r docs/requirements.txt
Navigate to the ``docs`` directory: Navigate to the ``docs`` directory:
@ -48,16 +48,16 @@ hypothetical new module ``subsystem`` in the file ``modsubsystem.c``:
#if MICROPY_PY_SUBSYSTEM #if MICROPY_PY_SUBSYSTEM
// info() // info()
STATIC mp_obj_t py_subsystem_info(void) { static mp_obj_t py_subsystem_info(void) {
return MP_OBJ_NEW_SMALL_INT(42); return MP_OBJ_NEW_SMALL_INT(42);
} }
MP_DEFINE_CONST_FUN_OBJ_0(subsystem_info_obj, py_subsystem_info); MP_DEFINE_CONST_FUN_OBJ_0(subsystem_info_obj, py_subsystem_info);
STATIC const mp_rom_map_elem_t mp_module_subsystem_globals_table[] = { static const mp_rom_map_elem_t mp_module_subsystem_globals_table[] = {
{ MP_ROM_QSTR(MP_QSTR___name__), MP_ROM_QSTR(MP_QSTR_subsystem) }, { MP_ROM_QSTR(MP_QSTR___name__), MP_ROM_QSTR(MP_QSTR_subsystem) },
{ MP_ROM_QSTR(MP_QSTR_info), MP_ROM_PTR(&subsystem_info_obj) }, { MP_ROM_QSTR(MP_QSTR_info), MP_ROM_PTR(&subsystem_info_obj) },
}; };
STATIC MP_DEFINE_CONST_DICT(mp_module_subsystem_globals, mp_module_subsystem_globals_table); static MP_DEFINE_CONST_DICT(mp_module_subsystem_globals, mp_module_subsystem_globals_table);
const mp_obj_module_t mp_module_subsystem = { const mp_obj_module_t mp_module_subsystem = {
.base = { &mp_type_module }, .base = { &mp_type_module },
@ -128,7 +128,7 @@ The file ``factorial.c`` contains:
#include "py/dynruntime.h" #include "py/dynruntime.h"
// Helper function to compute factorial // Helper function to compute factorial
STATIC mp_int_t factorial_helper(mp_int_t x) { static mp_int_t factorial_helper(mp_int_t x) {
if (x == 0) { if (x == 0) {
return 1; return 1;
} }
@ -136,7 +136,7 @@ The file ``factorial.c`` contains:
} }
// This is the function which will be called from Python, as factorial(x) // This is the function which will be called from Python, as factorial(x)
STATIC mp_obj_t factorial(mp_obj_t x_obj) { static mp_obj_t factorial(mp_obj_t x_obj) {
// Extract the integer from the MicroPython input object // Extract the integer from the MicroPython input object
mp_int_t x = mp_obj_get_int(x_obj); mp_int_t x = mp_obj_get_int(x_obj);
// Calculate the factorial // Calculate the factorial
@ -145,7 +145,7 @@ The file ``factorial.c`` contains:
return mp_obj_new_int(result); return mp_obj_new_int(result);
} }
// Define a Python reference to the function above // Define a Python reference to the function above
STATIC MP_DEFINE_CONST_FUN_OBJ_1(factorial_obj, factorial); static MP_DEFINE_CONST_FUN_OBJ_1(factorial_obj, factorial);
// This is the entry point and is called when the module is imported // This is the entry point and is called when the module is imported
mp_obj_t mpy_init(mp_obj_fun_bc_t *self, size_t n_args, size_t n_kw, mp_obj_t *args) { mp_obj_t mpy_init(mp_obj_fun_bc_t *self, size_t n_args, size_t n_kw, mp_obj_t *args) {

View File

@ -33,7 +33,7 @@ Variables
MicroPython processes local and global variables differently. Global variables MicroPython processes local and global variables differently. Global variables
are stored and looked up from a global dictionary that is allocated on the heap are stored and looked up from a global dictionary that is allocated on the heap
(note that each module has its own separate dict, so separate namespace). (note that each module has its own separate dict, so separate namespace).
Local variables on the other hand are are stored on the Python value stack, which may Local variables on the other hand are stored on the Python value stack, which may
live on the C stack or on the heap. They are accessed directly by their offset live on the C stack or on the heap. They are accessed directly by their offset
within the Python stack, which is more efficient than a global lookup in a dict. within the Python stack, which is more efficient than a global lookup in a dict.
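A minimal sketch of the practical upshot (illustrative names only, not taken from
the upstream docs): caching a global, or a bound method, in a local variable moves
the repeated dict lookups out of a hot loop::

SCALE = 3  # module-level global, fetched from the module dict on every access

def scale_all(data, scale=SCALE):  # capture the global once as a fast local
    out = []
    append = out.append            # cache the method lookup in a local too
    for x in data:
        append(x * scale)          # only local/stack accesses inside the loop
    return out

print(scale_all(range(5)))         # [0, 3, 6, 9, 12]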
@ -38,6 +38,7 @@ The basic MicroPython firmware is implemented in the main port file, e.g ``main.
.. code-block:: c .. code-block:: c
#include "py/builtin.h"
#include "py/compile.h" #include "py/compile.h"
#include "py/gc.h" #include "py/gc.h"
#include "py/mperrno.h" #include "py/mperrno.h"
@ -82,7 +83,7 @@ The basic MicroPython firmware is implemented in the main port file, e.g ``main.
} }
// There is no filesystem so opening a file raises an exception. // There is no filesystem so opening a file raises an exception.
mp_lexer_t *mp_lexer_new_from_file(const char *filename) { mp_lexer_t *mp_lexer_new_from_file(qstr filename) {
mp_raise_OSError(MP_ENOENT); mp_raise_OSError(MP_ENOENT);
} }
@ -110,6 +111,9 @@ We also need a Makefile at this point for the port:
shared/runtime/pyexec.c \ shared/runtime/pyexec.c \
shared/runtime/stdout_helpers.c \ shared/runtime/stdout_helpers.c \
# Define source files containung qstrs.
SRC_QSTR += shared/readline/readline.c shared/runtime/pyexec.c
# Define the required object files. # Define the required object files.
OBJ = $(PY_CORE_O) $(addprefix $(BUILD)/, $(SRC_C:.c=.o)) OBJ = $(PY_CORE_O) $(addprefix $(BUILD)/, $(SRC_C:.c=.o))
@ -147,9 +151,6 @@ The following is an example of an ``mpconfigport.h`` file:
#define MICROPY_ERROR_REPORTING (MICROPY_ERROR_REPORTING_TERSE) #define MICROPY_ERROR_REPORTING (MICROPY_ERROR_REPORTING_TERSE)
#define MICROPY_FLOAT_IMPL (MICROPY_FLOAT_IMPL_FLOAT) #define MICROPY_FLOAT_IMPL (MICROPY_FLOAT_IMPL_FLOAT)
// Enable u-modules to be imported with their standard name, like sys.
#define MICROPY_MODULE_WEAK_LINKS (1)
// Fine control over Python builtins, classes, modules, etc. // Fine control over Python builtins, classes, modules, etc.
#define MICROPY_PY_ASYNC_AWAIT (0) #define MICROPY_PY_ASYNC_AWAIT (0)
#define MICROPY_PY_BUILTINS_SET (0) #define MICROPY_PY_BUILTINS_SET (0)
@ -261,17 +262,17 @@ To add a custom module like ``myport``, first add the module definition in a fil
#include "py/runtime.h" #include "py/runtime.h"
STATIC mp_obj_t myport_info(void) { static mp_obj_t myport_info(void) {
mp_printf(&mp_plat_print, "info about my port\n"); mp_printf(&mp_plat_print, "info about my port\n");
return mp_const_none; return mp_const_none;
} }
STATIC MP_DEFINE_CONST_FUN_OBJ_0(myport_info_obj, myport_info); static MP_DEFINE_CONST_FUN_OBJ_0(myport_info_obj, myport_info);
STATIC const mp_rom_map_elem_t myport_module_globals_table[] = { static const mp_rom_map_elem_t myport_module_globals_table[] = {
{ MP_OBJ_NEW_QSTR(MP_QSTR___name__), MP_OBJ_NEW_QSTR(MP_QSTR_myport) }, { MP_OBJ_NEW_QSTR(MP_QSTR___name__), MP_OBJ_NEW_QSTR(MP_QSTR_myport) },
{ MP_ROM_QSTR(MP_QSTR_info), MP_ROM_PTR(&myport_info_obj) }, { MP_ROM_QSTR(MP_QSTR_info), MP_ROM_PTR(&myport_info_obj) },
}; };
STATIC MP_DEFINE_CONST_DICT(myport_module_globals, myport_module_globals_table); static MP_DEFINE_CONST_DICT(myport_module_globals, myport_module_globals_table);
const mp_obj_module_t myport_module = { const mp_obj_module_t myport_module = {
.base = { &mp_type_module }, .base = { &mp_type_module },
@ -0,0 +1,33 @@
.. Preamble section inserted into generated output
Positional-only Parameters
--------------------------
To save code size, many functions that accept keyword arguments in CPython only accept positional arguments in MicroPython.
MicroPython marks positional-only parameters in the same way as CPython, by inserting a ``/`` to mark the end of the positional parameters. Any function whose signature ends in ``/`` takes *only* positional arguments. For more details, see `PEP 570 <https://peps.python.org/pep-0570/>`_.
Example
~~~~~~~
For example, in CPython 3.4 this is the signature of the constructor ``socket.socket``::
socket.socket(family=AF_INET, type=SOCK_STREAM, proto=0, fileno=None)
However, the signature documented in :func:`MicroPython<socket.socket>` is::
socket(af=AF_INET, type=SOCK_STREAM, proto=IPPROTO_TCP, /)
The ``/`` at the end of the parameters indicates that they are all positional-only in MicroPython. The following code works in CPython but not in most MicroPython ports::
import socket
s = socket.socket(type=socket.SOCK_DGRAM)
MicroPython will raise an exception::
TypeError: function doesn't take keyword arguments
The following code will work in both CPython and MicroPython::
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
@ -18,6 +18,9 @@ working with this board it may be useful to get an overview of the microcontroll
general.rst general.rst
tutorial/index.rst tutorial/index.rst
Note that there are several varieties of ESP32 -- ESP32, ESP32C3, ESP32C6, ESP32S2, ESP32S3 --
supported by MicroPython, with some differences in functionality between them.
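A minimal sketch for checking which variant a particular board is running, assuming
the usual ``os`` and ``sys`` modules are enabled in the firmware::

import os, sys

print(sys.platform)        # 'esp32' for the whole ESP32 family
print(os.uname().machine)  # chip/board description string, wording varies by build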
Installing MicroPython Installing MicroPython
---------------------- ----------------------
@ -57,14 +60,19 @@ The :mod:`esp32` module::
import esp32 import esp32
esp32.hall_sensor() # read the internal hall sensor
esp32.raw_temperature() # read the internal temperature of the MCU, in Fahrenheit esp32.raw_temperature() # read the internal temperature of the MCU, in Fahrenheit
esp32.ULP() # access to the Ultra-Low-Power Co-processor esp32.ULP() # access to the Ultra-Low-Power Co-processor, not on ESP32C3/C6
Note that the temperature sensor in the ESP32 will typically read higher than Note that the temperature sensor in the ESP32 will typically read higher than
ambient due to the IC getting warm while it runs. This effect can be minimised ambient due to the IC getting warm while it runs. This effect can be minimised
by reading the temperature sensor immediately after waking up from sleep. by reading the temperature sensor immediately after waking up from sleep.
ESP32C3, ESP32C6, ESP32S2, and ESP32S3 also have an internal temperature sensor available.
It is implemented a bit differently to the ESP32 and returns the temperature in
Celsius::
esp32.mcu_temperature() # read the internal temperature of the MCU, in Celsius
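Where only ``raw_temperature()`` is available the reading is in Fahrenheit; a minimal
sketch of deriving a Celsius value from it with ordinary arithmetic::

import esp32

t_f = esp32.raw_temperature()     # Fahrenheit reading
t_c = (t_f - 32) * 5 / 9          # convert to Celsius
print('MCU temperature: %.1f C' % t_c)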
Networking Networking
---------- ----------
@ -81,7 +89,7 @@ The :mod:`network` module::
wlan.isconnected() # check if the station is connected to an AP wlan.isconnected() # check if the station is connected to an AP
wlan.connect('ssid', 'key') # connect to an AP wlan.connect('ssid', 'key') # connect to an AP
wlan.config('mac') # get the interface's MAC address wlan.config('mac') # get the interface's MAC address
wlan.ifconfig() # get the interface's IP/netmask/gw/DNS addresses wlan.ipconfig('addr4') # get the interface's IPv4 addresses
ap = network.WLAN(network.AP_IF) # create access-point interface ap = network.WLAN(network.AP_IF) # create access-point interface
ap.config(ssid='ESP-AP') # set the SSID of the access point ap.config(ssid='ESP-AP') # set the SSID of the access point
@ -99,10 +107,10 @@ A useful function for connecting to your local WiFi network is::
wlan.connect('ssid', 'key') wlan.connect('ssid', 'key')
while not wlan.isconnected(): while not wlan.isconnected():
pass pass
print('network config:', wlan.ifconfig()) print('network config:', wlan.ipconfig('addr4'))
Once the network is established the :mod:`socket <socket>` module can be used Once the network is established the :mod:`socket <socket>` module can be used
to create and use TCP/UDP sockets as usual, and the ``urequests`` module for to create and use TCP/UDP sockets as usual, and the ``requests`` module for
convenient HTTP requests. convenient HTTP requests.
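A minimal sketch of such an HTTP request, assuming ``requests`` is bundled in the
firmware or has been installed with ``mip``::

import requests

r = requests.get('http://micropython.org/ks/test.html')
print(r.status_code)
print(r.text)
r.close()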
After a call to ``wlan.connect()``, the device will by default retry to connect After a call to ``wlan.connect()``, the device will by default retry to connect
@ -122,13 +130,14 @@ To use the wired interfaces one has to specify the pins and mode ::
lan = network.LAN(mdc=PIN_MDC, ...) # Set the pin and mode configuration lan = network.LAN(mdc=PIN_MDC, ...) # Set the pin and mode configuration
lan.active(True) # activate the interface lan.active(True) # activate the interface
lan.ifconfig() # get the interface's IP/netmask/gw/DNS addresses lan.ipconfig('addr4') # get the interface's IPv4 addresses
The keyword arguments for the constructor defining the PHY type and interface are: The keyword arguments for the constructor defining the PHY type and interface are:
- mdc=pin-object # set the mdc and mdio pins. - mdc=pin-object # set the mdc and mdio pins.
- mdio=pin-object - mdio=pin-object
- reset=pin-object # set the reset pin of the PHY device.
- power=pin-object # set the pin which switches the power of the PHY device. - power=pin-object # set the pin which switches the power of the PHY device.
- phy_type=<type> # Select the PHY device type. Supported devices are PHY_LAN8710, - phy_type=<type> # Select the PHY device type. Supported devices are PHY_LAN8710,
PHY_LAN8720, PH_IP101, PHY_RTL8201, PHY_DP83848 and PHY_KSZ8041 PHY_LAN8720, PH_IP101, PHY_RTL8201, PHY_DP83848 and PHY_KSZ8041
@ -137,19 +146,11 @@ The keyword arguments for the constructor defining the PHY type and interface ar
or output. Suitable values are Pin.IN and Pin.OUT. or output. Suitable values are Pin.IN and Pin.OUT.
- ref_clk=pin-object # defines the Pin used for ref_clk. - ref_clk=pin-object # defines the Pin used for ref_clk.
The options ref_clk_mode and ref_clk require at least esp-idf version 4.4. For
earlier esp-idf versions, these parameters must be defined by kconfig board options.
These are working configurations for LAN interfaces of popular boards:: These are working configurations for LAN interfaces of popular boards::
# Olimex ESP32-GATEWAY: power controlled by Pin(5) # Olimex ESP32-GATEWAY: power controlled by Pin(5)
# Olimex ESP32 PoE and ESP32-PoE ISO: power controlled by Pin(12) # Olimex ESP32 PoE and ESP32-PoE ISO: power controlled by Pin(12)
lan = network.LAN(mdc=machine.Pin(23), mdio=machine.Pin(18), power=machine.Pin(5),
phy_type=network.PHY_LAN8720, phy_addr=0)
# or with dynamic ref_clk pin configuration
lan = network.LAN(mdc=machine.Pin(23), mdio=machine.Pin(18), power=machine.Pin(5), lan = network.LAN(mdc=machine.Pin(23), mdio=machine.Pin(18), power=machine.Pin(5),
phy_type=network.PHY_LAN8720, phy_addr=0, phy_type=network.PHY_LAN8720, phy_addr=0,
ref_clk=machine.Pin(17), ref_clk_mode=machine.Pin.OUT) ref_clk=machine.Pin(17), ref_clk_mode=machine.Pin.OUT)
@ -159,21 +160,17 @@ These are working configurations for LAN interfaces of popular boards::
lan = network.LAN(mdc=machine.Pin(23), mdio=machine.Pin(18), lan = network.LAN(mdc=machine.Pin(23), mdio=machine.Pin(18),
phy_type=network.PHY_LAN8720, phy_addr=1, power=None) phy_type=network.PHY_LAN8720, phy_addr=1, power=None)
# Wireless-Tag's WT32-ETH01 v1.4
lan = network.LAN(mdc=machine.Pin(23), mdio=machine.Pin(18),
phy_type=network.PHY_LAN8720, phy_addr=1,
power=machine.Pin(16))
# Espressif ESP32-Ethernet-Kit_A_V1.2 # Espressif ESP32-Ethernet-Kit_A_V1.2
lan = network.LAN(id=0, mdc=Pin(23), mdio=Pin(18), power=Pin(5), lan = network.LAN(id=0, mdc=Pin(23), mdio=Pin(18), power=Pin(5),
phy_type=network.PHY_IP101, phy_addr=1) phy_type=network.PHY_IP101, phy_addr=1)
A suitable definition of the PHY interface in a sdkconfig.board file is::
CONFIG_ETH_PHY_INTERFACE_RMII=y
CONFIG_ETH_RMII_CLK_OUTPUT=y
CONFIG_ETH_RMII_CLK_OUT_GPIO=17
CONFIG_LWIP_LOCAL_HOSTNAME="ESP32_POE"
The value assigned to CONFIG_ETH_RMII_CLK_OUT_GPIO may vary depending on the
board's wiring.
Delay and timing Delay and timing
---------------- ----------------
@ -305,8 +302,8 @@ Use the :ref:`machine.PWM <machine.PWM>` class::
from machine import Pin, PWM from machine import Pin, PWM
pwm0 = PWM(Pin(0)) # create PWM object from a pin pwm0 = PWM(Pin(0), freq=5000, duty_u16=32768) # create PWM object from a pin
freq = pwm0.freq() # get current frequency (default 5kHz) freq = pwm0.freq() # get current frequency
pwm0.freq(1000) # set PWM frequency from 1Hz to 40MHz pwm0.freq(1000) # set PWM frequency from 1Hz to 40MHz
duty = pwm0.duty() # get current duty cycle, range 0-1023 (default 512, 50%) duty = pwm0.duty() # get current duty cycle, range 0-1023 (default 512, 50%)
@ -343,6 +340,19 @@ possible at the same frequency.
See more examples in the :ref:`esp32_pwm` tutorial. See more examples in the :ref:`esp32_pwm` tutorial.
DAC (digital to analog conversion)
----------------------------------
On the ESP32, DAC functionality is available on pins 25, 26.
On the ESP32S2, DAC functionality is available on pins 17, 18.
Use the DAC::
from machine import DAC, Pin
dac = DAC(Pin(25)) # create a DAC object acting on a pin
dac.write(128) # set a raw analog value in the range 0-255, 50% now
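A minimal sketch extending this, sweeping the raw value to produce a slow voltage
ramp (same pin assumption as above, pin 25 on a plain ESP32)::

import time
from machine import DAC, Pin

dac = DAC(Pin(25))
for v in range(0, 256, 8):    # raw range is 0-255
    dac.write(v)
    time.sleep_ms(20)         # roughly a 0.6 s ramp from 0 V to full scale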
ADC (analog to digital conversion) ADC (analog to digital conversion)
---------------------------------- ----------------------------------
@ -650,15 +660,15 @@ SD card
See :ref:`machine.SDCard <machine.SDCard>`. :: See :ref:`machine.SDCard <machine.SDCard>`. ::
import machine, os import machine, os, vfs
# Slot 2 uses pins sck=18, cs=5, miso=19, mosi=23 # Slot 2 uses pins sck=18, cs=5, miso=19, mosi=23
sd = machine.SDCard(slot=2) sd = machine.SDCard(slot=2)
os.mount(sd, '/sd') # mount vfs.mount(sd, '/sd') # mount
os.listdir('/sd') # list directory contents os.listdir('/sd') # list directory contents
os.umount('/sd') # eject vfs.umount('/sd') # eject
RMT RMT
--- ---
@ -17,7 +17,7 @@ Requirements
The first thing you need is a board with an ESP32 chip. The MicroPython The first thing you need is a board with an ESP32 chip. The MicroPython
software supports the ESP32 chip itself and any board should work. The main software supports the ESP32 chip itself and any board should work. The main
characteristic of a board is how the GPIO pins are connected to the outside characteristic of a board is how the GPIO pins are connected to the outside
world, and whether it includes a built-in USB-serial convertor to make the world, and whether it includes a built-in USB-serial converter to make the
UART available to your PC. UART available to your PC.
Names of pins will be given in this tutorial using the chip names (eg GPIO2) Names of pins will be given in this tutorial using the chip names (eg GPIO2)
@ -59,7 +59,7 @@ bootloader mode, and second you need to copy across the firmware. The exact
procedure for these steps is highly dependent on the particular board and you will procedure for these steps is highly dependent on the particular board and you will
need to refer to its documentation for details. need to refer to its documentation for details.
Fortunately, most boards have a USB connector, a USB-serial convertor, and the DTR Fortunately, most boards have a USB connector, a USB-serial converter, and the DTR
and RTS pins wired in a special way then deploying the firmware should be easy as and RTS pins wired in a special way then deploying the firmware should be easy as
all steps can be done automatically. Boards that have such features all steps can be done automatically. Boards that have such features
include the Adafruit Feather HUZZAH32, M5Stack, Wemos LOLIN32, and TinyPICO include the Adafruit Feather HUZZAH32, M5Stack, Wemos LOLIN32, and TinyPICO
@ -104,7 +104,7 @@ Serial prompt
Once you have the firmware on the device you can access the REPL (Python prompt) Once you have the firmware on the device you can access the REPL (Python prompt)
over UART0 (GPIO1=TX, GPIO3=RX), which might be connected to a USB-serial over UART0 (GPIO1=TX, GPIO3=RX), which might be connected to a USB-serial
convertor, depending on your board. The baudrate is 115200. converter, depending on your board. The baudrate is 115200.
From here you can now follow the ESP8266 tutorial, because these two Espressif chips From here you can now follow the ESP8266 tutorial, because these two Espressif chips
are very similar when it comes to using MicroPython on them. The ESP8266 tutorial are very similar when it comes to using MicroPython on them. The ESP8266 tutorial
@ -124,7 +124,7 @@ after it, here are troubleshooting recommendations:
* The flashing instructions above use flashing speed of 460800 baud, which is * The flashing instructions above use flashing speed of 460800 baud, which is
good compromise between speed and stability. However, depending on your good compromise between speed and stability. However, depending on your
module/board, USB-UART convertor, cables, host OS, etc., the above baud module/board, USB-UART converter, cables, host OS, etc., the above baud
rate may be too high and lead to errors. Try a more common 115200 baud rate may be too high and lead to errors. Try a more common 115200 baud
rate instead in such cases. rate instead in such cases.

@ -32,6 +32,18 @@ the prescaler of the MCPWM0 peripheral.
mem32[MCPWM0] = 0x55 # change PWM_CLK_PRESCALE mem32[MCPWM0] = 0x55 # change PWM_CLK_PRESCALE
print(hex(mem32[MCPWM0])) # read PWM_CLK_CFG_REG print(hex(mem32[MCPWM0])) # read PWM_CLK_CFG_REG
The specific addresses will be different on different ESP32
models. For example, ESP32-S3 uses these values:
.. code-block:: python3
DR_REG_DPORT_BASE = const(0x600C_0000)
DPORT_PERIP_CLK_EN0_REG = const(DR_REG_DPORT_BASE + 0x0018)
DPORT_PERIP_RST_EN0_REG = const(DR_REG_DPORT_BASE + 0x0020)
DPORT_PWM0_CLK_EN = const(1 << 17)
MCPWM0 = const(0x6001_E000 + 0x0004)
...
Note that before a peripheral can be used its clock must be enabled and it must Note that before a peripheral can be used its clock must be enabled and it must
be taken out of reset. In the above example the following registers are used be taken out of reset. In the above example the following registers are used
for this: for this:
@ -42,3 +54,83 @@ for this:
The MCPWM0 peripheral is in bit position 17 of the above two registers, hence The MCPWM0 peripheral is in bit position 17 of the above two registers, hence
the value of ``DPORT_PWM0_CLK_EN``. the value of ``DPORT_PWM0_CLK_EN``.
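As a sketch of that clock/reset step on the original ESP32 (the DPORT register
addresses below are assumptions based on the ESP32 reference manual, intended to
match the constants used earlier in this example)::

    from micropython import const
    from machine import mem32

    DPORT_PERIP_CLK_EN_REG = const(0x3FF000C0)   # peripheral clock enable (assumed address)
    DPORT_PERIP_RST_EN_REG = const(0x3FF000C4)   # peripheral reset (assumed address)
    DPORT_PWM0_CLK_EN = const(1 << 17)           # MCPWM0 is bit 17

    # Enable the MCPWM0 clock and take the peripheral out of reset
    mem32[DPORT_PERIP_CLK_EN_REG] |= DPORT_PWM0_CLK_EN
    mem32[DPORT_PERIP_RST_EN_REG] &= ~DPORT_PWM0_CLK_EN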
Synchronous access to pins directly via registers
-------------------------------------------------
The following code shows how to access pins directly via registers. It has been
tested on a generic ESP32 board. It configures pins 16, 17, 32 and 33 in output
mode via registers, and switches pin output values via registers. Pins 16 and
17 are switched simultaneously.
.. code-block:: python3
from micropython import const
from machine import mem32, Pin
GPIO_OUT_REG = const(0x3FF44004) # GPIO 0-31 output register
GPIO_OUT1_REG = const(0x3FF44010) # GPIO 32-39 output register
GPIO_ENABLE_REG = const(0x3FF44020) # GPIO 0-31 output enable register
GPIO_ENABLE1_REG = const(0x3FF4402C) # GPIO 32-39 output enable register
M16 = 1 << 16 # Pin(16) bit mask
M17 = 1 << 17 # Pin(17) bit mask
M32 = 1 << (32-32) # Pin(32) bit mask
M33 = 1 << (33-32) # Pin(33) bit mask
# Enable pin output mode like
# p16 = Pin(16, mode=Pin.OUT)
# p17 = Pin(17, mode=Pin.OUT)
# p32 = Pin(32, mode=Pin.OUT)
# p33 = Pin(33, mode=Pin.OUT)
mem32[GPIO_ENABLE_REG] = mem32[GPIO_ENABLE_REG] | M16 | M17
mem32[GPIO_ENABLE1_REG] = mem32[GPIO_ENABLE1_REG] | M32 | M33
print(hex(mem32[GPIO_OUT_REG]), hex(mem32[GPIO_OUT1_REG]))
# Set outputs to 1 like
# p16(1)
# p17(1)
# p32(1)
# p33(1)
mem32[GPIO_OUT_REG] = mem32[GPIO_OUT_REG] | M16 | M17
mem32[GPIO_OUT1_REG] = mem32[GPIO_OUT1_REG] | M32 | M33
print(hex(mem32[GPIO_OUT_REG]), hex(mem32[GPIO_OUT1_REG]))
# Set outputs to 0 like
# p16(0)
# p17(0)
# p32(0)
# p33(0)
mem32[GPIO_OUT_REG] = mem32[GPIO_OUT_REG] & ~(M16 | M17)
mem32[GPIO_OUT1_REG] = mem32[GPIO_OUT1_REG] & ~(M32 | M33)
print(hex(mem32[GPIO_OUT_REG]), hex(mem32[GPIO_OUT1_REG]))
while True:
# Set outputs to 1
mem32[GPIO_OUT_REG] = mem32[GPIO_OUT_REG] | M16 | M17
mem32[GPIO_OUT1_REG] = mem32[GPIO_OUT1_REG] | M32 | M33
# Set outputs to 0
mem32[GPIO_OUT_REG] = mem32[GPIO_OUT_REG] & ~(M16 | M17)
mem32[GPIO_OUT1_REG] = mem32[GPIO_OUT1_REG] & ~(M32 | M33)
Output is::
0x0 0x0
0x30000 0x3
0x0 0x0
Pins 16 and 17 are switched synchronously:
.. image:: img/mem32_gpio_output.jpg
Same image on pins 32 and 33.
Note that pins 34-36 and 39 are inputs only. Also pins 1 and 3 are Tx, Rx of the REPL UART,
pins 6-11 are connected to the built-in SPI flash.
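An alternative to the read-modify-write pattern above is the ESP32's "write 1 to
set/clear" registers, which change only the bits that are written. This is a sketch
under the assumption that the W1TS/W1TC registers sit at offsets 0x08/0x0C from the
GPIO base, as per the ESP32 reference manual::

    from micropython import const
    from machine import mem32

    GPIO_OUT_W1TS_REG = const(0x3FF44008)    # set bits of GPIO 0-31 (assumed address)
    GPIO_OUT_W1TC_REG = const(0x3FF4400C)    # clear bits of GPIO 0-31 (assumed address)

    M16 = 1 << 16
    M17 = 1 << 17

    mem32[GPIO_OUT_W1TS_REG] = M16 | M17     # drive pins 16 and 17 high
    mem32[GPIO_OUT_W1TC_REG] = M16 | M17     # drive pins 16 and 17 low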

@ -50,7 +50,7 @@ low all of the time.
* Example of a smooth frequency change:: * Example of a smooth frequency change::
from utime import sleep from time import sleep
from machine import Pin, PWM from machine import Pin, PWM
F_MIN = 500 F_MIN = 500
@ -75,7 +75,7 @@ low all of the time.
* Example of a smooth duty change:: * Example of a smooth duty change::
from utime import sleep from time import sleep
from machine import Pin, PWM from machine import Pin, PWM
DUTY_MAX = 2**16 - 1 DUTY_MAX = 2**16 - 1
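A complete version of the duty-sweep idea shown above might look like this (the
pin number and step size are arbitrary choices)::

    from time import sleep
    from machine import Pin, PWM

    DUTY_MAX = 2**16 - 1

    pwm = PWM(Pin(4), freq=1000)
    while True:
        for duty in range(0, DUTY_MAX, 1024):        # ramp up
            pwm.duty_u16(duty)
            sleep(0.01)
        for duty in range(DUTY_MAX, 0, -1024):       # ramp down
            pwm.duty_u16(duty)
            sleep(0.01)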

@ -59,7 +59,7 @@ The :mod:`network` module::
wlan.isconnected() # check if the station is connected to an AP wlan.isconnected() # check if the station is connected to an AP
wlan.connect('ssid', 'key') # connect to an AP wlan.connect('ssid', 'key') # connect to an AP
wlan.config('mac') # get the interface's MAC address wlan.config('mac') # get the interface's MAC address
wlan.ifconfig() # get the interface's IP/netmask/gw/DNS addresses wlan.ipconfig('addr4') # get the interface's IPv4 addresses
ap = network.WLAN(network.AP_IF) # create access-point interface ap = network.WLAN(network.AP_IF) # create access-point interface
ap.active(True) # activate the interface ap.active(True) # activate the interface
@ -76,7 +76,7 @@ A useful function for connecting to your local WiFi network is::
wlan.connect('ssid', 'key') wlan.connect('ssid', 'key')
while not wlan.isconnected(): while not wlan.isconnected():
pass pass
print('network config:', wlan.ifconfig()) print('network config:', wlan.ipconfig('addr4'))
Once the network is established the :mod:`socket <socket>` module can be used Once the network is established the :mod:`socket <socket>` module can be used
to create and use TCP/UDP sockets as usual. to create and use TCP/UDP sockets as usual.
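A self-contained version of that helper, using the ``ipconfig`` call shown above,
might look like::

    import network

    def do_connect(ssid, key):
        wlan = network.WLAN(network.STA_IF)
        wlan.active(True)
        if not wlan.isconnected():
            print('connecting to network...')
            wlan.connect(ssid, key)
            while not wlan.isconnected():
                pass
        print('network config:', wlan.ipconfig('addr4'))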

@ -18,7 +18,7 @@ The first thing you need is a board with an ESP8266 chip. The MicroPython
software supports the ESP8266 chip itself and any board should work. The main software supports the ESP8266 chip itself and any board should work. The main
characteristic of a board is how much flash it has, how the GPIO pins are characteristic of a board is how much flash it has, how the GPIO pins are
connected to the outside world, and whether it includes a built-in USB-serial connected to the outside world, and whether it includes a built-in USB-serial
convertor to make the UART available to your PC. converter to make the UART available to your PC.
The minimum requirement for flash size is 1Mbyte. There is also a special The minimum requirement for flash size is 1Mbyte. There is also a special
build for boards with 512KB, but it is highly limited comparing to the build for boards with 512KB, but it is highly limited comparing to the
@ -70,7 +70,7 @@ need to put your device in boot-loader mode, and second you need to copy across
the firmware. The exact procedure for these steps is highly dependent on the the firmware. The exact procedure for these steps is highly dependent on the
particular board and you will need to refer to its documentation for details. particular board and you will need to refer to its documentation for details.
If you have a board that has a USB connector, a USB-serial convertor, and has If you have a board that has a USB connector, a USB-serial converter, and has
the DTR and RTS pins wired in a special way then deploying the firmware should the DTR and RTS pins wired in a special way then deploying the firmware should
be easy as all steps can be done automatically. Boards that have such features be easy as all steps can be done automatically. Boards that have such features
include the Adafruit Feather HUZZAH and NodeMCU boards. include the Adafruit Feather HUZZAH and NodeMCU boards.
@ -128,7 +128,7 @@ Serial prompt
Once you have the firmware on the device you can access the REPL (Python prompt) Once you have the firmware on the device you can access the REPL (Python prompt)
over UART0 (GPIO1=TX, GPIO3=RX), which might be connected to a USB-serial over UART0 (GPIO1=TX, GPIO3=RX), which might be connected to a USB-serial
convertor, depending on your board. The baudrate is 115200. The next part of converter, depending on your board. The baudrate is 115200. The next part of
the tutorial will discuss the prompt in more detail. the tutorial will discuss the prompt in more detail.
WiFi WiFi
@ -137,7 +137,7 @@ WiFi
After a fresh install and boot the device configures itself as a WiFi access After a fresh install and boot the device configures itself as a WiFi access
point (AP) that you can connect to. The ESSID is of the form MicroPython-xxxxxx point (AP) that you can connect to. The ESSID is of the form MicroPython-xxxxxx
where the x's are replaced with part of the MAC address of your device (so will where the x's are replaced with part of the MAC address of your device (so will
be the same everytime, and most likely different for all ESP8266 chips). The be the same every time, and most likely different for all ESP8266 chips). The
password for the WiFi is micropythoN (note the upper-case N). Its IP address password for the WiFi is micropythoN (note the upper-case N). Its IP address
will be 192.168.4.1 once you connect to its network. WiFi configuration will will be 192.168.4.1 once you connect to its network. WiFi configuration will
be discussed in more detail later in the tutorial. be discussed in more detail later in the tutorial.
@ -169,7 +169,7 @@ after it, here are troubleshooting recommendations:
* The flashing instructions above use flashing speed of 460800 baud, which is * The flashing instructions above use flashing speed of 460800 baud, which is
good compromise between speed and stability. However, depending on your good compromise between speed and stability. However, depending on your
module/board, USB-UART convertor, cables, host OS, etc., the above baud module/board, USB-UART converter, cables, host OS, etc., the above baud
rate may be too high and lead to errors. Try a more common 115200 baud rate may be too high and lead to errors. Try a more common 115200 baud
rate instead in such cases. rate instead in such cases.

@ -19,10 +19,10 @@ You can check if the interfaces are active by::
You can also check the network settings of the interface by:: You can also check the network settings of the interface by::
>>> ap_if.ifconfig() >>> ap_if.ipconfig('addr4')
('192.168.4.1', '255.255.255.0', '192.168.4.1', '8.8.8.8') ('192.168.4.1', '255.255.255.0')
The returned values are: IP address, netmask, gateway, DNS. The returned values are: IP address and netmask.
Configuration of the WiFi Configuration of the WiFi
------------------------- -------------------------
@ -45,8 +45,8 @@ To check if the connection is established use::
Once established you can check the IP address:: Once established you can check the IP address::
>>> sta_if.ifconfig() >>> sta_if.ipconfig('addr4')
('192.168.0.2', '255.255.255.0', '192.168.0.1', '8.8.8.8') ('192.168.0.2', '255.255.255.0')
You can then disable the access-point interface if you no longer need it:: You can then disable the access-point interface if you no longer need it::
@ -64,7 +64,7 @@ connect to your WiFi network::
sta_if.connect('<ssid>', '<key>') sta_if.connect('<ssid>', '<key>')
while not sta_if.isconnected(): while not sta_if.isconnected():
pass pass
print('network config:', sta_if.ifconfig()) print('network config:', sta_if.ipconfig('addr4'))
Sockets Sockets
------- -------

@ -13,7 +13,7 @@ REPL over the serial port
The REPL is always available on the UART0 serial peripheral, which is connected The REPL is always available on the UART0 serial peripheral, which is connected
to the pins GPIO1 for TX and GPIO3 for RX. The baudrate of the REPL is 115200. to the pins GPIO1 for TX and GPIO3 for RX. The baudrate of the REPL is 115200.
If your board has a USB-serial convertor on it then you should be able to access If your board has a USB-serial converter on it then you should be able to access
the REPL directly from your PC. Otherwise you will need to have a way of the REPL directly from your PC. Otherwise you will need to have a way of
communicating with the UART. communicating with the UART.

@ -75,7 +75,7 @@ Classes
Returns the string representation of the array, called as ``str(a)`` or ``repr(a)``` Returns the string representation of the array, called as ``str(a)`` or ``repr(a)```
(where ``a`` is an ``array``). Returns the string ``"array(<type>, [<elements>])"``, (where ``a`` is an ``array``). Returns the string ``"array(<type>, [<elements>])"``,
where ``<type>`` is the type code letter for the array and ``<elements>`` is a comma where ``<type>`` is the type code letter for the array and ``<elements>`` is a comma
seperated list of the elements of the array. separated list of the elements of the array.
**Note:** ``__repr__`` cannot be called directly (``a.__repr__()`` fails) and **Note:** ``__repr__`` cannot be called directly (``a.__repr__()`` fails) and
is not present in ``__dict__``, however ``str(a)`` and ``repr(a)`` both work. is not present in ``__dict__``, however ``str(a)`` and ``repr(a)`` both work.
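A short illustration of the format described above::

    from array import array

    a = array('h', [1, 2, 3])
    print(str(a))     # array('h', [1, 2, 3])
    print(repr(a))    # array('h', [1, 2, 3])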

@ -1,7 +1,7 @@
:mod:`uasyncio` --- asynchronous I/O scheduler :mod:`asyncio` --- asynchronous I/O scheduler
============================================== =============================================
.. module:: uasyncio .. module:: asyncio
:synopsis: asynchronous I/O scheduler for writing concurrent code :synopsis: asynchronous I/O scheduler for writing concurrent code
|see_cpython_module| |see_cpython_module|
@ -9,27 +9,27 @@
Example:: Example::
import uasyncio import asyncio
async def blink(led, period_ms): async def blink(led, period_ms):
while True: while True:
led.on() led.on()
await uasyncio.sleep_ms(5) await asyncio.sleep_ms(5)
led.off() led.off()
await uasyncio.sleep_ms(period_ms) await asyncio.sleep_ms(period_ms)
async def main(led1, led2): async def main(led1, led2):
uasyncio.create_task(blink(led1, 700)) asyncio.create_task(blink(led1, 700))
uasyncio.create_task(blink(led2, 400)) asyncio.create_task(blink(led2, 400))
await uasyncio.sleep_ms(10_000) await asyncio.sleep_ms(10_000)
# Running on a pyboard # Running on a pyboard
from pyb import LED from pyb import LED
uasyncio.run(main(LED(1), LED(2))) asyncio.run(main(LED(1), LED(2)))
# Running on a generic board # Running on a generic board
from machine import Pin from machine import Pin
uasyncio.run(main(Pin(1), Pin(2))) asyncio.run(main(Pin(1), Pin(2)))
Core functions Core functions
-------------- --------------
@ -71,9 +71,9 @@ Additional functions
than *timeout* seconds. If *awaitable* is not a task then a task will be than *timeout* seconds. If *awaitable* is not a task then a task will be
created from it. created from it.
If a timeout occurs, it cancels the task and raises ``uasyncio.TimeoutError``: If a timeout occurs, it cancels the task and raises ``asyncio.TimeoutError``:
this should be trapped by the caller. The task receives this should be trapped by the caller. The task receives
``uasyncio.CancelledError`` which may be ignored or trapped using ``try...except`` ``asyncio.CancelledError`` which may be ignored or trapped using ``try...except``
or ``try...finally`` to run cleanup code. or ``try...finally`` to run cleanup code.
Returns the return value of *awaitable*. Returns the return value of *awaitable*.
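A small sketch of trapping the timeout (the worker coroutine here is only an
illustration)::

    import asyncio

    async def worker():
        await asyncio.sleep_ms(500)
        return 42

    async def main():
        try:
            result = await asyncio.wait_for(worker(), 0.1)   # timeout in seconds
        except asyncio.TimeoutError:
            print('worker timed out')
        else:
            print('result:', result)

    asyncio.run(main())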
@ -108,7 +108,7 @@ class Task
.. method:: Task.cancel() .. method:: Task.cancel()
Cancel the task by injecting ``uasyncio.CancelledError`` into it. The task may Cancel the task by injecting ``asyncio.CancelledError`` into it. The task may
ignore this exception. Cleanup code may be run by trapping it, or via ignore this exception. Cleanup code may be run by trapping it, or via
``try ... finally``. ``try ... finally``.
@ -148,9 +148,8 @@ class ThreadSafeFlag
.. class:: ThreadSafeFlag() .. class:: ThreadSafeFlag()
Create a new flag which can be used to synchronise a task with code running Create a new flag which can be used to synchronise a task with code running
outside the uasyncio loop, such as other threads, IRQs, or scheduler outside the asyncio loop, such as other threads, IRQs, or scheduler
callbacks. Flags start in the cleared state. The class does not currently callbacks. Flags start in the cleared state.
work under the Unix build of MicroPython.
.. method:: ThreadSafeFlag.set() .. method:: ThreadSafeFlag.set()
@ -201,10 +200,12 @@ class Lock
TCP stream connections TCP stream connections
---------------------- ----------------------
.. function:: open_connection(host, port) .. function:: open_connection(host, port, ssl=None)
Open a TCP connection to the given *host* and *port*. The *host* address will be Open a TCP connection to the given *host* and *port*. The *host* address will be
resolved using `socket.getaddrinfo`, which is currently a blocking call. resolved using `socket.getaddrinfo`, which is currently a blocking call.
If *ssl* is a `ssl.SSLContext` object, this context is used to create the transport;
if *ssl* is ``True``, a default context is used.
Returns a pair of streams: a reader and a writer stream. Returns a pair of streams: a reader and a writer stream.
Will raise a socket-specific ``OSError`` if the host could not be resolved or if Will raise a socket-specific ``OSError`` if the host could not be resolved or if
@ -212,12 +213,14 @@ TCP stream connections
This is a coroutine. This is a coroutine.
.. function:: start_server(callback, host, port, backlog=5) .. function:: start_server(callback, host, port, backlog=5, ssl=None)
Start a TCP server on the given *host* and *port*. The *callback* will be Start a TCP server on the given *host* and *port*. The *callback* will be
called with incoming, accepted connections, and be passed 2 arguments: reader called with incoming, accepted connections, and be passed 2 arguments: reader
and writer streams for the connection. and writer streams for the connection.
If *ssl* is a `ssl.SSLContext` object, this context is used to create the transport.
Returns a `Server` object. Returns a `Server` object.
This is a coroutine. This is a coroutine.
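A minimal client sketch using ``open_connection`` (the host and request are
examples only)::

    import asyncio

    async def fetch(host, port):
        reader, writer = await asyncio.open_connection(host, port)
        writer.write(b'GET / HTTP/1.0\r\n\r\n')
        await writer.drain()
        print(await reader.readline())      # first line of the response
        writer.close()
        await writer.wait_closed()

    asyncio.run(fetch('micropython.org', 80))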

@ -44,7 +44,7 @@ Configuration
Get or set configuration values of the BLE interface. To get a value the Get or set configuration values of the BLE interface. To get a value the
parameter name should be quoted as a string, and just one parameter is parameter name should be quoted as a string, and just one parameter is
queried at a time. To set values use the keyword syntax, and one ore more queried at a time. To set values use the keyword syntax, and one or more
parameter can be set at a time. parameter can be set at a time.
Currently supported values are: Currently supported values are:
@ -312,7 +312,7 @@ Broadcaster Role (Advertiser)
in all broadcasts, and *resp_data* is send in reply to an active scan. in all broadcasts, and *resp_data* is send in reply to an active scan.
**Note:** if *adv_data* (or *resp_data*) is ``None``, then the data passed **Note:** if *adv_data* (or *resp_data*) is ``None``, then the data passed
to the previous call to ``gap_advertise`` will be re-used. This allows a to the previous call to ``gap_advertise`` will be reused. This allows a
broadcaster to resume advertising with just ``gap_advertise(interval_us)``. broadcaster to resume advertising with just ``gap_advertise(interval_us)``.
To clear the advertising payload pass an empty ``bytes``, i.e. ``b''``. To clear the advertising payload pass an empty ``bytes``, i.e. ``b''``.
@ -722,7 +722,7 @@ Pairing and bonding
and ``_IRQ_SET_SECRET`` events. and ``_IRQ_SET_SECRET`` events.
**Note:** This is currently only supported when using the NimBLE stack on **Note:** This is currently only supported when using the NimBLE stack on
STM32 and Unix (not ESP32). ESP32, STM32 and Unix.
.. method:: BLE.gap_pair(conn_handle, /) .. method:: BLE.gap_pair(conn_handle, /)

@ -82,6 +82,10 @@ Functions and types
In MicroPython, `byteorder` parameter must be positional (this is In MicroPython, `byteorder` parameter must be positional (this is
compatible with CPython). compatible with CPython).
.. note:: The optional ``signed`` kwarg from CPython is not supported.
MicroPython currently converts negative integers as signed,
and positive as unsigned. (:ref:`Details <cpydiff_types_int_to_bytes>`.)
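For example, based on the behaviour described above (MicroPython-specific, so
treat the exact results as illustrative)::

    print((1000).to_bytes(2, 'little'))   # b'\xe8\x03'  (positive: unsigned)
    print((-1).to_bytes(2, 'little'))     # b'\xff\xff'  (negative: signed)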
.. function:: isinstance() .. function:: isinstance()
.. function:: issubclass() .. function:: issubclass()

@ -18,7 +18,9 @@ Classes
appends and pops from either side of the deque. New deques are created appends and pops from either side of the deque. New deques are created
using the following arguments: using the following arguments:
- *iterable* must be the empty tuple, and the new deque is created empty. - *iterable* is an iterable used to populate the deque when it is
created. It can be an empty tuple or list to create a deque that
is initially empty.
- *maxlen* must be specified and the deque will be bounded to this - *maxlen* must be specified and the deque will be bounded to this
maximum length. Once the deque is full, any new items added will maximum length. Once the deque is full, any new items added will
@ -26,18 +28,37 @@ Classes
- The optional *flags* can be 1 to check for overflow when adding items. - The optional *flags* can be 1 to check for overflow when adding items.
As well as supporting `bool` and `len`, deque objects have the following Deque objects support `bool`, `len`, iteration and subscript load and store.
methods: They also have the following methods:
.. method:: deque.append(x) .. method:: deque.append(x)
Add *x* to the right side of the deque. Add *x* to the right side of the deque.
Raises IndexError if overflow checking is enabled and there is no more room left. Raises ``IndexError`` if overflow checking is enabled and there is
no more room in the queue.
.. method:: deque.appendleft(x)
Add *x* to the left side of the deque.
Raises ``IndexError`` if overflow checking is enabled and there is
no more room in the queue.
.. method:: deque.pop()
Remove and return an item from the right side of the deque.
Raises ``IndexError`` if no items are present.
.. method:: deque.popleft() .. method:: deque.popleft()
Remove and return an item from the left side of the deque. Remove and return an item from the left side of the deque.
Raises IndexError if no items are present. Raises ``IndexError`` if no items are present.
.. method:: deque.extend(iterable)
Extend the deque by appending all the items from *iterable* to
the right of the deque.
Raises ``IndexError`` if overflow checking is enabled and there is
no more room in the deque.
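A short sketch of the behaviour listed above::

    from collections import deque

    d = deque((), 5)                 # empty deque bounded to 5 items
    d.append(1)
    d.appendleft(0)
    d.extend([2, 3])
    print(len(d), d[0])              # 4 0
    print(d.popleft(), d.pop())      # 0 3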
.. function:: namedtuple(name, fields) .. function:: namedtuple(name, fields)

docs/library/deflate.rst (new file, 182 lines added)
@ -0,0 +1,182 @@
:mod:`deflate` -- deflate compression & decompression
=====================================================
.. module:: deflate
:synopsis: deflate compression & decompression
This module allows compression and decompression of binary data with the
`DEFLATE algorithm <https://en.wikipedia.org/wiki/DEFLATE>`_
(commonly used in the zlib library and gzip archiver).
**Availability:**
* Added in MicroPython v1.21.
* Decompression: Enabled via the ``MICROPY_PY_DEFLATE`` build option, on by default
on ports with the "extra features" level or higher (which is most boards).
* Compression: Enabled via the ``MICROPY_PY_DEFLATE_COMPRESS`` build option, on
by default on ports with the "full features" level or higher (generally this means
you need to build your own firmware to enable this).
Classes
-------
.. class:: DeflateIO(stream, format=AUTO, wbits=0, close=False, /)
This class can be used to wrap a *stream* which is any
:term:`stream-like <stream>` object such as a file, socket, or stream
(including :class:`io.BytesIO`). It is itself a stream and implements the
standard read/readinto/write/close methods.
The *stream* must be a blocking stream. Non-blocking streams are currently
not supported.
The *format* can be set to any of the constants defined below, and defaults
to ``AUTO`` which for decompressing will auto-detect gzip or zlib streams,
and for compressing it will generate a raw stream.
The *wbits* parameter sets the base-2 logarithm of the DEFLATE dictionary
window size. So for example, setting *wbits* to ``10`` sets the window size
to 1024 bytes. Valid values are ``5`` to ``15`` inclusive (corresponding to
window sizes of 32 to 32k bytes).
If *wbits* is set to ``0`` (the default), then for compression a window size
of 256 bytes will be used (as if *wbits* was set to 8). For decompression, it
depends on the format:
* ``RAW`` will use 256 bytes (corresponding to *wbits* set to 8).
* ``ZLIB`` (or ``AUTO`` with zlib detected) will use the value from the zlib
header.
* ``GZIP`` (or ``AUTO`` with gzip detected) will use 32 kilobytes
(corresponding to *wbits* set to 15).
See the :ref:`window size <deflate_wbits>` notes below for more information
about the window size, zlib, and gzip streams.
If *close* is set to ``True`` then the underlying stream will be closed
automatically when the :class:`deflate.DeflateIO` stream is closed. This is
useful if you want to return a :class:`deflate.DeflateIO` stream that wraps
another stream and not have the caller need to know about managing the
underlying stream.
If compression is enabled, a given :class:`deflate.DeflateIO` instance
supports both reading and writing. For example, a bidirectional stream like
a socket can be wrapped, which allows for compression/decompression in both
directions.
Constants
---------
.. data:: deflate.AUTO
deflate.RAW
deflate.ZLIB
deflate.GZIP
Supported values for the *format* parameter.
Examples
--------
A typical use case for :class:`deflate.DeflateIO` is to read or write a compressed
file from storage:
.. code:: python
import deflate
# Writing a zlib-compressed stream (uses the default window size of 256 bytes).
with open("data.gz", "wb") as f:
with deflate.DeflateIO(f, deflate.ZLIB) as d:
# Use d.write(...) etc
# Reading a zlib-compressed stream (auto-detect window size).
with open("data.z", "rb") as f:
with deflate.DeflateIO(f, deflate.ZLIB) as d:
# Use d.read(), d.readinto(), etc.
Because :class:`deflate.DeflateIO` is a stream, it can be used for example
with :meth:`json.dump` and :meth:`json.load` (and any other places streams can
be used):
.. code:: python
import deflate, json
# Write a dictionary as JSON in gzip format, with a
# small (64 byte) window size.
config = { ... }
with open("config.gz", "wb") as f:
with deflate.DeflateIO(f, deflate.GZIP, 6) as f:
json.dump(config, f)
# Read back that dictionary.
with open("config.gz", "rb") as f:
with deflate.DeflateIO(f, deflate.GZIP, 6) as f:
config = json.load(f)
If your source data is not in a stream format, you can use :class:`io.BytesIO`
to turn it into a stream suitable for use with :class:`deflate.DeflateIO`:
.. code:: python
import deflate, io
# Decompress a bytes/bytearray value.
compressed_data = get_data_z()
with deflate.DeflateIO(io.BytesIO(compressed_data), deflate.ZLIB) as d:
decompressed_data = d.read()
# Compress a bytes/bytearray value.
uncompressed_data = get_data()
stream = io.BytesIO()
with deflate.DeflateIO(stream, deflate.ZLIB) as d:
d.write(uncompressed_data)
compressed_data = stream.getvalue()
.. _deflate_wbits:
Deflate window size
-------------------
The window size limits how far back in the stream the (de)compressor can
reference. Increasing the window size will improve compression, but will require
more memory and make the compressor slower.
If an input stream was compressed with a given window size, then `DeflateIO`
using a smaller window size will fail mid-way during decompression with
:exc:`OSError`, but only if a back-reference actually refers back further
than the decompressor's window size. This means it may be possible to decompress
with a smaller window size. For example, this would trivially be the case if the
original uncompressed data is shorter than the window size.
Decompression
~~~~~~~~~~~~~
The zlib format includes a header which specifies the window size that was used
to compress the data. This indicates the maximum window size required to
decompress this stream. If this header value is less than the specified *wbits*
value (or if *wbits* is unset), then the header value will be used.
The gzip format does not include the window size in the header, and assumes that
all gzip compressors (e.g. the ``gzip`` utility, or CPython's implementation of
:class:`gzip.GzipFile`) use the maximum window size of 32kiB. For this reason,
if the *wbits* parameter is not set, the decompressor will use a 32 kiB window
size (corresponding to *wbits* set to 15). This means that to be able to
decompress an arbitrary gzip stream, you must have at least this much RAM
available. If you control the source data, consider instead using the zlib
format with a smaller window size.
The raw format has no header and therefore does not include any information
about the window size. If *wbits* is not set, then it will default to a window
size of 256 bytes, which may not be large enough for a given stream. Therefore
it is recommended that you should always explicitly set *wbits* if using the raw
format.
Compression
~~~~~~~~~~~
For compression, MicroPython will default to a window size of 256 bytes for all
formats. This provides a reasonable amount of compression with minimal memory
usage and fast compression time, and will generate output that will work with
any decompressor.
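On builds with compression enabled, a raw stream can be round-tripped by passing
the same *wbits* to both sides (a sketch; the data and window size are arbitrary)::

    import deflate, io

    data = b'micropython' * 10

    buf = io.BytesIO()
    with deflate.DeflateIO(buf, deflate.RAW, 8) as d:    # 256 byte window
        d.write(data)
    compressed = buf.getvalue()

    # A raw stream has no header, so the decompressor needs the same wbits.
    with deflate.DeflateIO(io.BytesIO(compressed), deflate.RAW, 8) as d:
        assert d.read() == data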

@ -62,12 +62,35 @@ Functions
.. function:: flash_erase(sector_no) .. function:: flash_erase(sector_no)
.. function:: osdebug(level) .. function:: osdebug(uart_no)
Turn esp os debugging messages on or off. .. note:: This is the ESP8266 form of this function.
The *level* parameter sets the threshold for the log messages for all esp components. Change the level of OS serial debug log messages. On boot,
The log levels are defined as constants: OS serial debug log messages are disabled.
``uart_no`` is the number of the UART peripheral which should receive
OS-level output, or ``None`` to disable OS serial debug log messages.
.. function:: osdebug(uart_no, [level])
:no-index:
.. note:: This is the ESP32 form of this function.
Change the level of OS serial debug log messages. On boot, OS
serial debug log messages are limited to Error output only.
The behaviour of this function depends on the arguments passed to it. The
following combinations are supported:
``osdebug(None)`` restores the default OS debug log message level
(``LOG_ERROR``).
``osdebug(0)`` enables all available OS debug log messages (in the
default build configuration this is ``LOG_INFO``).
``osdebug(0, level)`` sets the OS debug log message level to the
specified value. The log levels are defined as constants:
* ``LOG_NONE`` -- No log output * ``LOG_NONE`` -- No log output
* ``LOG_ERROR`` -- Critical errors, software module can not recover on its own * ``LOG_ERROR`` -- Critical errors, software module can not recover on its own
@ -77,6 +100,15 @@ Functions
* ``LOG_VERBOSE`` -- Bigger chunks of debugging information, or frequent messages * ``LOG_VERBOSE`` -- Bigger chunks of debugging information, or frequent messages
which can potentially flood the output which can potentially flood the output
.. note:: ``LOG_DEBUG`` and ``LOG_VERBOSE`` are not compiled into the
MicroPython binary by default, to save size. A custom build with a
modified "``sdkconfig``" source file is needed to see any output
at these log levels.
.. note:: Log output on ESP32 is automatically suspended in "Raw REPL" mode,
to prevent communications issues. This means OS level logging is never
seen when using ``mpremote run`` and similar tools.
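For example, using the ESP32 form described above::

    import esp

    esp.osdebug(None)                 # back to the default (LOG_ERROR only)
    esp.osdebug(0)                    # everything compiled into the build
    esp.osdebug(0, esp.LOG_WARNING)   # errors and warnings only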
.. function:: set_native_code_location(start, length) .. function:: set_native_code_location(start, length)
**Note**: ESP8266 only **Note**: ESP8266 only

@ -44,10 +44,6 @@ Functions
Read the raw value of the internal temperature sensor, returning an integer. Read the raw value of the internal temperature sensor, returning an integer.
.. function:: hall_sensor()
Read the raw value of the internal Hall sensor, returning an integer.
.. function:: idf_heap_info(capabilities) .. function:: idf_heap_info(capabilities)
Returns information about the ESP-IDF heap memory regions. One of them contains Returns information about the ESP-IDF heap memory regions. One of them contains
@ -55,8 +51,6 @@ Functions
buffers and other data. This data is useful to get a sense of how much memory buffers and other data. This data is useful to get a sense of how much memory
is available to ESP-IDF and the networking stack in particular. It may shed is available to ESP-IDF and the networking stack in particular. It may shed
some light on situations where ESP-IDF operations fail due to allocation failures. some light on situations where ESP-IDF operations fail due to allocation failures.
The information returned is *not* useful to troubleshoot Python allocation failures,
use `micropython.mem_info()` instead.
The capabilities parameter corresponds to ESP-IDF's ``MALLOC_CAP_XXX`` values but the The capabilities parameter corresponds to ESP-IDF's ``MALLOC_CAP_XXX`` values but the
two most useful ones are predefined as `esp32.HEAP_DATA` for data heap regions and two most useful ones are predefined as `esp32.HEAP_DATA` for data heap regions and
@ -72,6 +66,21 @@ Functions
[(240, 0, 0, 0), (7288, 0, 0, 0), (16648, 4, 4, 4), (79912, 35712, 35512, 35108), [(240, 0, 0, 0), (7288, 0, 0, 0), (16648, 4, 4, 4), (79912, 35712, 35512, 35108),
(15072, 15036, 15036, 15036), (113840, 0, 0, 0)] (15072, 15036, 15036, 15036), (113840, 0, 0, 0)]
.. note:: Free IDF heap memory in the `esp32.HEAP_DATA` region is available
to be automatically added to the MicroPython heap to prevent a
MicroPython allocation from failing. However, the information returned
here is otherwise *not* useful to troubleshoot Python allocation
failures. :func:`micropython.mem_info()` and :func:`gc.mem_free()` should
be used instead:
The "max new split" value in :func:`micropython.mem_info()` output
corresponds to the largest free block of ESP-IDF heap that could be
automatically added on demand to the MicroPython heap.
The result of :func:`gc.mem_free()` is the total of the current "free"
and "max new split" values printed by :func:`micropython.mem_info()`.
Flash partitions Flash partitions
---------------- ----------------
@ -105,12 +114,17 @@ methods to enable over-the-air (OTA) updates.
These methods implement the simple and :ref:`extended These methods implement the simple and :ref:`extended
<block-device-interface>` block protocol defined by <block-device-interface>` block protocol defined by
:class:`os.AbstractBlockDev`. :class:`vfs.AbstractBlockDev`.
.. method:: Partition.set_boot() .. method:: Partition.set_boot()
Sets the partition as the boot partition. Sets the partition as the boot partition.
.. note:: Do not enter :func:`deepsleep<machine.deepsleep>` after changing
the OTA boot partition, without first performing a hard
:func:`reset<machine.reset>` or power cycle. This ensures the bootloader
will validate the new image before booting.
.. method:: Partition.get_next_update() .. method:: Partition.get_next_update()
Gets the next update partition after this one, and returns a new Partition object. Gets the next update partition after this one, and returns a new Partition object.
@ -126,7 +140,7 @@ methods to enable over-the-air (OTA) updates.
and an ``OSError(-261)`` is raised if called on firmware that doesn't have the and an ``OSError(-261)`` is raised if called on firmware that doesn't have the
feature enabled. feature enabled.
It is OK to call ``mark_app_valid_cancel_rollback`` on every boot and it is not It is OK to call ``mark_app_valid_cancel_rollback`` on every boot and it is not
necessary when booting firmare that was loaded using esptool. necessary when booting firmware that was loaded using esptool.
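A rough sketch of the OTA flow implied by these methods (how the new image is
obtained and written is left out)::

    import machine
    from esp32 import Partition

    cur = Partition(Partition.RUNNING)
    nxt = cur.get_next_update()       # the other OTA app partition

    # ... write the new firmware image into `nxt` using nxt.writeblocks() ...

    nxt.set_boot()                    # boot the new image on the next start
    machine.reset()                   # hard reset so the bootloader validates it

    # After the new firmware has booted and checked itself, it should call
    # Partition.mark_app_valid_cancel_rollback() to make the choice permanent.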
Constants Constants
~~~~~~~~~ ~~~~~~~~~
@ -179,7 +193,7 @@ numbers specified in ``write_pulses`` are multiplied by the resolution to
define the pulses. define the pulses.
``clock_div`` is an 8-bit divider (0-255) and each pulse can be defined by ``clock_div`` is an 8-bit divider (0-255) and each pulse can be defined by
multiplying the resolution by a 15-bit (0-32,768) number. There are eight multiplying the resolution by a 15-bit (1-``PULSE_MAX``) number. There are eight
channels (0-7) and each can have a different clock divider. channels (0-7) and each can have a different clock divider.
So, in the example above, the 80MHz clock is divided by 8. Thus the So, in the example above, the 80MHz clock is divided by 8. Thus the
@ -212,7 +226,7 @@ For more details see Espressif's `ESP-IDF RMT documentation.
``100``) and the output level to apply the carrier to (a boolean as per ``100``) and the output level to apply the carrier to (a boolean as per
*idle_level*). *idle_level*).
.. method:: RMT.source_freq() .. classmethod:: RMT.source_freq()
Returns the source clock frequency. Currently the source clock is not Returns the source clock frequency. Currently the source clock is not
configurable so this will always return 80MHz. configurable so this will always return 80MHz.
@ -250,10 +264,10 @@ For more details see Espressif's `ESP-IDF RMT documentation.
**Mode 3:** *duration* and *data* are lists or tuples of equal length, **Mode 3:** *duration* and *data* are lists or tuples of equal length,
specifying individual durations and the output level for each. specifying individual durations and the output level for each.
Durations are in integer units of the channel resolution (as described Durations are in integer units of the channel resolution (as
above), between 1 and 32767 units. Output levels are any value that can described above), between 1 and ``PULSE_MAX`` units. Output levels
be converted to a boolean, with ``True`` representing high voltage and are any value that can be converted to a boolean, with ``True``
``False`` representing low. representing high voltage and ``False`` representing low.
If transmission of an earlier sequence is in progress then this method will If transmission of an earlier sequence is in progress then this method will
block until that transmission is complete before beginning the new sequence. block until that transmission is complete before beginning the new sequence.
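A small sketch of mode 3 (the channel, pin and durations are arbitrary choices)::

    from machine import Pin
    from esp32 import RMT

    r = RMT(0, pin=Pin(18), clock_div=8)    # 80MHz / 8 -> 10MHz, i.e. 100ns per unit

    # explicit duration (in 100ns units) and output level for every pulse
    r.write_pulses((500, 1000, 500), (1, 0, 1))
    r.wait_done()                           # block until the sequence is sent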
@ -276,9 +290,24 @@ For more details see Espressif's `ESP-IDF RMT documentation.
Passing in no argument will not change the channel. This function returns Passing in no argument will not change the channel. This function returns
the current channel number. the current channel number.
Constants
---------
.. data:: RMT.PULSE_MAX
Maximum integer that can be set for a pulse duration.
Ultra-Low-Power co-processor Ultra-Low-Power co-processor
---------------------------- ----------------------------
This class gives access to the Ultra Low Power (ULP) co-processor on the ESP32,
ESP32-S2 and ESP32-S3 chips.
.. warning::
This class does not provide access to the RISCV ULP co-processor available
on the ESP32-S2 and ESP32-S3 chips.
.. class:: ULP() .. class:: ULP()
This class provides access to the Ultra-Low-Power co-processor. This class provides access to the Ultra-Low-Power co-processor.

docs/library/espnow.rst (new file, 936 lines added)
@ -0,0 +1,936 @@
:mod:`espnow` --- support for the ESP-NOW wireless protocol
===========================================================
.. module:: espnow
:synopsis: ESP-NOW wireless protocol support
This module provides an interface to the `ESP-NOW <https://www.espressif.com/
en/products/software/esp-now/overview>`_ protocol provided by Espressif on
ESP32 and ESP8266 devices (`API docs <https://docs.espressif.com/
projects/esp-idf/en/latest/api-reference/network/esp_now.html>`_).
Table of Contents:
------------------
- `Introduction`_
- `Configuration`_
- `Sending and Receiving Data`_
- `Peer Management`_
- `Callback Methods`_
- `Exceptions`_
- `Constants`_
- `Wifi Signal Strength (RSSI) - (ESP32 Only)`_
- `Supporting asyncio`_
- `Broadcast and Multicast`_
- `ESPNow and Wifi Operation`_
- `ESPNow and Sleep Modes`_
Introduction
------------
ESP-NOW is a connection-less wireless communication protocol supporting:
- Direct communication between up to 20 registered peers:
- Without the need for a wireless access point (AP),
- Encrypted and unencrypted communication (up to 6 encrypted peers),
- Message sizes up to 250 bytes,
- Can operate alongside Wifi operation (:doc:`network.WLAN<network.WLAN>`) on
ESP32 and ESP8266 devices.
It is especially useful for small IoT networks, latency sensitive or power
sensitive applications (such as battery operated devices) and for long-range
communication between devices (hundreds of metres).
This module also supports tracking the Wifi signal strength (RSSI) of peer
devices.
A simple example would be:
**Sender:** ::
import network
import espnow
# A WLAN interface must be active to send()/recv()
sta = network.WLAN(network.STA_IF) # Or network.AP_IF
sta.active(True)
sta.disconnect() # For ESP8266
e = espnow.ESPNow()
e.active(True)
peer = b'\xbb\xbb\xbb\xbb\xbb\xbb' # MAC address of peer's wifi interface
e.add_peer(peer) # Must add_peer() before send()
e.send(peer, "Starting...")
for i in range(100):
e.send(peer, str(i)*20, True)
e.send(peer, b'end')
**Receiver:** ::
import network
import espnow
# A WLAN interface must be active to send()/recv()
sta = network.WLAN(network.STA_IF)
sta.active(True)
sta.disconnect() # Because ESP8266 auto-connects to last Access Point
e = espnow.ESPNow()
e.active(True)
while True:
host, msg = e.recv()
if msg: # msg == None if timeout in recv()
print(host, msg)
if msg == b'end':
break
class ESPNow
------------
Constructor
-----------
.. class:: ESPNow()
Returns the singleton ESPNow object. As this is a singleton, all calls to
`espnow.ESPNow()` return a reference to the same object.
.. note::
Some methods are available only on the ESP32 due to code size
restrictions on the ESP8266 and differences in the Espressif API.
Configuration
-------------
.. method:: ESPNow.active([flag])
Initialise or de-initialise the ESP-NOW communication protocol depending on
the value of the ``flag`` optional argument.
.. data:: Arguments:
- *flag*: Any python value which can be converted to a boolean type.
- ``True``: Prepare the software and hardware for use of the ESP-NOW
communication protocol, including:
- initialise the ESPNow data structures,
- allocate the recv data buffer,
- invoke esp_now_init() and
- register the send and recv callbacks.
- ``False``: De-initialise the Espressif ESP-NOW software stack
(esp_now_deinit()), disable callbacks, deallocate the recv
data buffer and deregister all peers.
If *flag* is not provided, return the current status of the ESPNow
interface.
.. data:: Returns:
``True`` if interface is currently *active*, else ``False``.
.. method:: ESPNow.config(param=value, ...)
ESPNow.config('param') (ESP32 only)
Set or get configuration values of the ESPNow interface. To set values, use
the keyword syntax, and one or more parameters can be set at a time. To get
a value the parameter name should be quoted as a string, and just one
parameter is queried at a time.
**Note:** *Getting* parameters is not supported on the ESP8266.
.. data:: Options:
*rxbuf*: (default=526) Get/set the size in bytes of the internal
buffer used to store incoming ESPNow packet data. The default size is
selected to fit two max-sized ESPNow packets (250 bytes) with associated
mac_address (6 bytes), a message byte count (1 byte) and RSSI data plus
buffer overhead. Increase this if you expect to receive a lot of large
packets or expect bursty incoming traffic.
**Note:** The recv buffer is allocated by `ESPNow.active()`. Changing
this value will have no effect until the next call of
`ESPNow.active(True)<ESPNow.active()>`.
*timeout_ms*: (default=300,000) Default timeout (in milliseconds)
for receiving ESPNow messages. If *timeout_ms* is less than zero, then
wait forever. The timeout can also be provided as arg to
`recv()`/`irecv()`/`recvinto()`.
*rate*: (ESP32 only, IDF>=4.3.0 only) Set the transmission speed for
ESPNow packets. Must be set to a number from the allowed numeric values
in `enum wifi_phy_rate_t
<https://docs.espressif.com/projects/esp-idf/en/v4.4.1/esp32/
api-reference/network/esp_wifi.html#_CPPv415wifi_phy_rate_t>`_.
.. data:: Returns:
``None`` or the value of the parameter being queried.
.. data:: Raises:
- ``OSError(num, "ESP_ERR_ESPNOW_NOT_INIT")`` if not initialised.
- ``ValueError()`` on invalid configuration options or values.
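For example (ESP32 syntax; the values are arbitrary)::

    import espnow

    e = espnow.ESPNow()
    e.active(True)
    e.config(timeout_ms=100)    # default timeout used by recv()/irecv()
    e.config(rxbuf=1024)        # takes effect at the next call of active(True)
    print(e.config('rxbuf'))    # getting values back is ESP32 only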
Sending and Receiving Data
--------------------------
A wifi interface (``network.STA_IF`` or ``network.AP_IF``) must be
`active()<network.WLAN.active>` before messages can be sent or received,
but it is not necessary to connect or configure the WLAN interface.
For example::
import network
sta = network.WLAN(network.STA_IF)
sta.active(True)
sta.disconnect() # For ESP8266
**Note:** The ESP8266 has a *feature* that causes it to automatically reconnect
to the last wifi Access Point when set `active(True)<network.WLAN.active>` (even
after reboot/reset). This reduces the reliability of receiving ESP-NOW messages
(see `ESPNow and Wifi Operation`_). You can avoid this by calling
`disconnect()<network.WLAN.disconnect>` after
`active(True)<network.WLAN.active>`.
.. method:: ESPNow.send(mac, msg[, sync])
ESPNow.send(msg) (ESP32 only)
Send the data contained in ``msg`` to the peer with given network ``mac``
address. In the second form, ``mac=None`` and ``sync=True``. The peer must
be registered with `ESPNow.add_peer()<ESPNow.add_peer()>` before the
message can be sent.
.. data:: Arguments:
- *mac*: byte string exactly ``espnow.ADDR_LEN`` (6 bytes) long or
``None``. If *mac* is ``None`` (ESP32 only) the message will be sent
to all registered peers, except any broadcast or multicast MAC
addresses.
- *msg*: string or byte-string up to ``espnow.MAX_DATA_LEN`` (250)
bytes long.
- *sync*:
- ``True``: (default) send ``msg`` to the peer(s) and wait for a
response (or not).
- ``False`` send ``msg`` and return immediately. Responses from the
peers will be discarded.
.. data:: Returns:
``True`` if ``sync=False`` or if ``sync=True`` and *all* peers respond,
else ``False``.
.. data:: Raises:
- ``OSError(num, "ESP_ERR_ESPNOW_NOT_INIT")`` if not initialised.
- ``OSError(num, "ESP_ERR_ESPNOW_NOT_FOUND")`` if peer is not registered.
- ``OSError(num, "ESP_ERR_ESPNOW_IF")`` the wifi interface is not
`active()<network.WLAN.active>`.
- ``OSError(num, "ESP_ERR_ESPNOW_NO_MEM")`` internal ESP-NOW buffers are
full.
- ``ValueError()`` on invalid values for the parameters.
**Note**: A peer will respond with success if its wifi interface is
`active()<network.WLAN.active>` and set to the same channel as the sender,
regardless of whether it has initialised its ESP-NOW system or is
actively listening for ESP-NOW traffic (see the Espressif ESP-NOW docs).
.. method:: ESPNow.recv([timeout_ms])
Wait for an incoming message and return the ``mac`` address of the peer and
the message. **Note**: It is **not** necessary to register a peer (using
`add_peer()<ESPNow.add_peer()>`) to receive a message from that peer.
.. data:: Arguments:
- *timeout_ms*: (Optional): May have the following values.
- ``0``: No timeout. Return immediately if no data is available;
- ``> 0``: Specify a timeout value in milliseconds;
- ``< 0``: Do not timeout, ie. wait forever for new messages; or
- ``None`` (or not provided): Use the default timeout value set with
`ESPNow.config()`.
.. data:: Returns:
- ``(None, None)`` if timeout is reached before a message is received, or
- ``[mac, msg]``: where:
- ``mac`` is a bytestring containing the address of the device which
sent the message, and
- ``msg`` is a bytestring containing the message.
.. data:: Raises:
- ``OSError(num, "ESP_ERR_ESPNOW_NOT_INIT")`` if not initialised.
- ``OSError(num, "ESP_ERR_ESPNOW_IF")`` if the wifi interface is not
`active()<network.WLAN.active>`.
- ``ValueError()`` on invalid *timeout_ms* values.
`ESPNow.recv()` will allocate new storage for the returned list and the
``peer`` and ``msg`` bytestrings. This can lead to memory fragmentation if
the data rate is high. See `ESPNow.irecv()` for a memory-friendly
alternative.
.. method:: ESPNow.irecv([timeout_ms])
Works like `ESPNow.recv()` but will reuse internal bytearrays to store the
return values: ``[mac, msg]``, so that no new memory is allocated on each
call.
.. data:: Arguments:
*timeout_ms*: (Optional) Timeout in milliseconds (see `ESPNow.recv()`).
.. data:: Returns:
- As for `ESPNow.recv()`, except that ``msg`` is a bytearray, instead of
a bytestring. On the ESP8266, ``mac`` will also be a bytearray.
.. data:: Raises:
- See `ESPNow.recv()`.
**Note:** You may also read messages by iterating over the ESPNow object,
which will use the `irecv()` method for alloc-free reads, eg: ::
import espnow
e = espnow.ESPNow(); e.active(True)
for mac, msg in e:
print(mac, msg)
if mac is None: # mac, msg will equal (None, None) on timeout
break
.. method:: ESPNow.recvinto(data[, timeout_ms])
Wait for an incoming message and return the length of the message in bytes.
This is the low-level method used by both `recv()<ESPNow.recv()>` and
`irecv()` to read messages.
.. data:: Arguments:
*data*: A list of at least two elements, ``[peer, msg]``. ``msg`` must
be a bytearray large enough to hold the message (250 bytes). On the
ESP8266, ``peer`` should be a bytearray of 6 bytes. The MAC address of
the sender and the message will be stored in these bytearrays (see Note
on ESP32 below).
*timeout_ms*: (Optional) Timeout in milliseconds (see `ESPNow.recv()`).
.. data:: Returns:
- Length of message in bytes or 0 if *timeout_ms* is reached before a
message is received.
.. data:: Raises:
- See `ESPNow.recv()`.
**Note:** On the ESP32:
- It is unnecessary to provide a bytearray in the first element of the
``data`` list because it will be replaced by a reference to a unique
``peer`` address in the **peer device table** (see `ESPNow.peers_table`).
- If the list is at least 4 elements long, the rssi and timestamp values
will be saved as the 3rd and 4th elements.
.. method:: ESPNow.any()
Check if data is available to be read with `ESPNow.recv()`.
For more sophisticated querying of available characters use `select.poll()`::
import select
import espnow
e = espnow.ESPNow()
poll = select.poll()
poll.register(e, select.POLLIN)
poll.poll(timeout)
.. data:: Returns:
``True`` if data is available to be read, else ``False``.
.. method:: ESPNow.stats() (ESP32 only)
.. data:: Returns:
A 5-tuple containing the number of packets sent/received/lost:
``(tx_pkts, tx_responses, tx_failures, rx_packets, rx_dropped_packets)``
Incoming packets are *dropped* when the recv buffers are full. To reduce
packet loss, increase the ``rxbuf`` config parameters and ensure you are
reading messages as quickly as possible.
**Note**: Dropped packets will still be acknowledged to the sender as
received.
Peer Management
---------------
On ESP32 devices, the Espressif ESP-NOW software requires that other devices
(peers) must be *registered* using `add_peer()` before we can
`send()<ESPNow.send()>` them messages (this is *not* enforced on ESP8266
devices). It is **not** necessary to register a peer to receive an
un-encrypted message from that peer.
**Encrypted messages**: To receive an *encrypted* message, the receiving device
must first register the sender and use the same encryption keys as the sender
(PMK and LMK) (see `set_pmk()` and `add_peer()`).
.. method:: ESPNow.set_pmk(pmk)
Set the Primary Master Key (PMK) which is used to encrypt the Local Master
Keys (LMK) for encrypting messages. If this is not set, a default PMK is
used by the underlying Espressif ESP-NOW software stack.
**Note:** messages will only be encrypted if *lmk* is also set in
`ESPNow.add_peer()` (see `Security
<https://docs.espressif.com/projects/esp-idf/en/latest/
esp32/api-reference/network/esp_now.html#security>`_ in the Espressif API
docs).
.. data:: Arguments:
*pmk*: Must be a byte string, bytearray or string of length
`espnow.KEY_LEN` (16 bytes).
.. data:: Returns:
``None``
.. data:: Raises:
``ValueError()`` on invalid *pmk* values.
.. method:: ESPNow.add_peer(mac, [lmk], [channel], [ifidx], [encrypt])
ESPNow.add_peer(mac, param=value, ...) (ESP32 only)
Add/register the provided *mac* address as a peer. Additional parameters may
also be specified as positional or keyword arguments (any parameter set to
``None`` will be set to its default value):
.. data:: Arguments:
- *mac*: The MAC address of the peer (as a 6-byte byte-string).
- *lmk*: The Local Master Key (LMK) key used to encrypt data
transfers with this peer (unless the *encrypt* parameter is set to
``False``). Must be:
- a byte-string or bytearray or string of length ``espnow.KEY_LEN``
(16 bytes), or
- any non ``True`` python value (default= ``b''``), signifying an
*empty* key which will disable encryption.
- *channel*: The wifi channel (2.4GHz) to communicate with this peer.
Must be an integer from 0 to 14. If channel is set to 0 the current
channel of the wifi device will be used. (default=0)
- *ifidx*: (ESP32 only) Index of the wifi interface which will be
used to send data to this peer. Must be an integer set to
``network.STA_IF`` (=0) or ``network.AP_IF`` (=1).
(default=0/``network.STA_IF``). See `ESPNow and Wifi Operation`_
below for more information.
- *encrypt*: (ESP32 only) If set to ``True`` data exchanged with
this peer will be encrypted with the PMK and LMK. (default =
``True`` if *lmk* is set to a valid key, else ``False``)
**ESP8266**: Keyword args may not be used on the ESP8266.
**Note:** The maximum number of peers which may be registered is 20
(`espnow.MAX_TOTAL_PEER_NUM`), with a maximum of 6
(`espnow.MAX_ENCRYPT_PEER_NUM`) of those peers with encryption enabled
(see `ESP_NOW_MAX_ENCRYPT_PEER_NUM <https://docs.espressif.com/
projects/esp-idf/en/latest/esp32/api-reference/network/
esp_now.html#c.ESP_NOW_MAX_ENCRYPT_PEER_NUM>`_ in the Espressif API
docs).
.. data:: Raises:
- ``OSError(num, "ESP_ERR_ESPNOW_NOT_INIT")`` if not initialised.
- ``OSError(num, "ESP_ERR_ESPNOW_EXIST")`` if *mac* is already
registered.
- ``OSError(num, "ESP_ERR_ESPNOW_FULL")`` if too many peers are
already registered.
- ``ValueError()`` on invalid keyword args or values.
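A minimal sketch of registering an encrypted peer (ESP32 keyword syntax; the keys
and MAC address are placeholders, and both devices must use the same PMK/LMK)::

    import network, espnow

    sta = network.WLAN(network.STA_IF)
    sta.active(True)

    e = espnow.ESPNow()
    e.active(True)
    e.set_pmk(b'0123456789abcdef')      # 16-byte primary master key

    peer = b'\xbb\xbb\xbb\xbb\xbb\xbb'  # example peer MAC address
    e.add_peer(peer, lmk=b'fedcba9876543210', encrypt=True)
    e.send(peer, 'encrypted hello')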
.. method:: ESPNow.del_peer(mac)
Deregister the peer associated with the provided *mac* address.
.. data:: Returns:
``None``
.. data:: Raises:
- ``OSError(num, "ESP_ERR_ESPNOW_NOT_INIT")`` if not initialised.
- ``OSError(num, "ESP_ERR_ESPNOW_NOT_FOUND")`` if *mac* is not
registered.
- ``ValueError()`` on invalid *mac* values.
.. method:: ESPNow.get_peer(mac) (ESP32 only)
Return information on a registered peer.
.. data:: Returns:
``(mac, lmk, channel, ifidx, encrypt)``: a tuple of the "peer
info" associated with the given *mac* address.
.. data:: Raises:
- ``OSError(num, "ESP_ERR_ESPNOW_NOT_INIT")`` if not initialised.
- ``OSError(num, "ESP_ERR_ESPNOW_NOT_FOUND")`` if *mac* is not
registered.
- ``ValueError()`` on invalid *mac* values.
.. method:: ESPNow.peer_count() (ESP32 only)
Return the number of registered peers:
- ``(peer_num, encrypt_num)``: where
- ``peer_num`` is the number of peers which are registered, and
- ``encrypt_num`` is the number of encrypted peers.
.. method:: ESPNow.get_peers() (ESP32 only)
Return the "peer info" parameters for all the registered peers (as a tuple
of tuples).
.. method:: ESPNow.mod_peer(mac, lmk, [channel], [ifidx], [encrypt]) (ESP32 only)
ESPNow.mod_peer(mac, 'param'=value, ...) (ESP32 only)
Modify the parameters of the peer associated with the provided *mac*
address. Parameters may be provided as positional or keyword arguments
(see `ESPNow.add_peer()`). Any parameter that is not set (or set to
``None``) will retain the existing value for that parameter.
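For example, a small sketch that moves an already-registered peer to channel 11
while leaving its other parameters unchanged (reusing ``peer`` from the
examples above)::

    e.mod_peer(peer, channel=11)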
Callback Methods
----------------
.. method:: ESPNow.irq(callback) (ESP32 only)
Set a callback function to be called *as soon as possible* after a message has
been received from another ESPNow device. The callback function will be called
with the `ESPNow` instance object as an argument. For more reliable operation,
it is recommended to read out as many messages as are available when the
callback is invoked and to set the read timeout to zero, eg: ::
def recv_cb(e):
while True: # Read out all messages waiting in the buffer
mac, msg = e.irecv(0) # Don't wait if no messages left
if mac is None:
return
print(mac, msg)
e.irq(recv_cb)
The `irq()<ESPNow.irq()>` callback method is an alternative method for
processing incoming messages, especially if the data rate is moderate
and the device is *not too busy* but there are some caveats:
- The scheduler stack *can* overflow and callbacks will be missed if
packets are arriving at a sufficient rate or if other MicroPython components
(eg, bluetooth, machine.Pin.irq(), machine.timer, i2s, ...) are exercising
the scheduler stack. This method may be less reliable for dealing with
bursts of messages, with high throughput, or on a device which is busy dealing
with other hardware operations.
- For more information on *scheduled* function callbacks see:
`micropython.schedule()<micropython.schedule>`.
Constants
---------
.. data:: espnow.MAX_DATA_LEN(=250)
espnow.KEY_LEN(=16)
espnow.ADDR_LEN(=6)
espnow.MAX_TOTAL_PEER_NUM(=20)
espnow.MAX_ENCRYPT_PEER_NUM(=6)
Exceptions
----------
If the underlying Espressif ESP-NOW software stack returns an error code,
the MicroPython espnow module will raise an ``OSError(errnum, errstring)``
exception where ``errstring`` is set to the name of one of the error codes
identified in the
`Espressif ESP-NOW docs
<https://docs.espressif.com/projects/esp-idf/en/latest/
api-reference/network/esp_now.html#api-reference>`_. For example::
try:
e.send(peer, 'Hello')
except OSError as err:
if len(err.args) < 2:
raise err
if err.args[1] == 'ESP_ERR_ESPNOW_NOT_INIT':
e.active(True)
elif err.args[1] == 'ESP_ERR_ESPNOW_NOT_FOUND':
e.add_peer(peer)
elif err.args[1] == 'ESP_ERR_ESPNOW_IF':
network.WLAN(network.STA_IF).active(True)
else:
raise err
Wifi Signal Strength (RSSI) - (ESP32 only)
------------------------------------------
The ESPNow object maintains a **peer device table** which contains the signal
strength and timestamp of the last received message from all hosts. The **peer
device table** can be accessed using `ESPNow.peers_table` and can be used to
track device proximity and identify *nearest neighbours* in a network of peer
devices. This feature is **not** available on ESP8266 devices.
.. data:: ESPNow.peers_table
A reference to the **peer device table**: a dict of known peer devices
and rssi values::
{peer: [rssi, time_ms], ...}
where:
- ``peer`` is the peer MAC address (as `bytes`);
- ``rssi`` is the wifi signal strength in dBm (-127 to 0) of the last
message received from the peer; and
- ``time_ms`` is the time the message was received (in milliseconds since
system boot - wraps every 12 days).
Example::
>>> e.peers_table
{b'\xaa\xaa\xaa\xaa\xaa\xaa': [-31, 18372],
b'\xbb\xbb\xbb\xbb\xbb\xbb': [-43, 12541]}
**Note**: the ``mac`` addresses returned by `recv()` are references to
the ``peer`` key values in the **peer device table**.
**Note**: RSSI and timestamp values in the device table are updated only
when the message is read by the application.
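A short sketch of using `ESPNow.peers_table` to pick the *nearest* peer by
signal strength (``nearest_peer`` is a hypothetical helper, not part of the
module)::

    def nearest_peer(e):
        best_mac, best_rssi = None, -128
        for mac, (rssi, time_ms) in e.peers_table.items():
            if rssi > best_rssi:
                best_mac, best_rssi = mac, rssi
        return best_mac, best_rssi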
Supporting asyncio
------------------
A supplementary module (`aioespnow`) is available to provide
:doc:`asyncio<asyncio>` support.
**Note:** Asyncio support is available on all ESP32 targets as well as those
ESP8266 boards which include the asyncio module (ie. ESP8266 devices with at
least 2MB flash memory).
A small async server example::
import network
import aioespnow
import asyncio
# A WLAN interface must be active to send()/recv()
network.WLAN(network.STA_IF).active(True)
e = aioespnow.AIOESPNow() # Returns AIOESPNow enhanced with async support
e.active(True)
peer = b'\xbb\xbb\xbb\xbb\xbb\xbb'
e.add_peer(peer)
# Send a periodic ping to a peer
async def heartbeat(e, peer, period=30):
while True:
if not await e.asend(peer, b'ping'):
print("Heartbeat: peer not responding:", peer)
else:
print("Heartbeat: ping", peer)
await asyncio.sleep(period)
# Echo any received messages back to the sender
async def echo_server(e):
async for mac, msg in e:
print("Echo:", msg)
try:
await e.asend(mac, msg)
except OSError as err:
if len(err.args) > 1 and err.args[1] == 'ESP_ERR_ESPNOW_NOT_FOUND':
e.add_peer(mac)
await e.asend(mac, msg)
async def main(e, peer, timeout, period):
asyncio.create_task(heartbeat(e, peer, period))
asyncio.create_task(echo_server(e))
await asyncio.sleep(timeout)
asyncio.run(main(e, peer, 120, 10))
.. module:: aioespnow
:synopsis: ESP-NOW :doc:`asyncio` support
.. class:: AIOESPNow()
The `AIOESPNow` class inherits all the methods of `ESPNow<espnow.ESPNow>`
and extends the interface with the following async methods.
.. method:: async AIOESPNow.arecv()
Asyncio support for `ESPNow.recv()`. Note that this method does not take a
timeout value as argument.
.. method:: async AIOESPNow.airecv()
Asyncio support for `ESPNow.irecv()`. Note that this method does not take a
timeout value as argument.
.. method:: async AIOESPNow.asend(mac, msg, sync=True)
async AIOESPNow.asend(msg)
Asyncio support for `ESPNow.send()`.
.. method:: AIOESPNow.__aiter__() / async AIOESPNow.__anext__()
`AIOESPNow` also supports reading incoming messages by asynchronous
iteration using ``async for``; eg::
e = AIOESPNow()
e.active(True)
async def recv_till_halt(e):
async for mac, msg in e:
print(mac, msg)
if msg == b'halt':
break
asyncio.run(recv_till_halt(e))
Broadcast and Multicast
-----------------------
All active ESPNow clients will receive messages sent to their MAC address and
all devices (**except ESP8266 devices**) will also receive messages sent to the
*broadcast* MAC address (``b'\xff\xff\xff\xff\xff\xff'``) or any multicast
MAC address.
All ESPNow devices (including ESP8266 devices) can also send messages to the
broadcast MAC address or any multicast MAC address.
To `send()<ESPNow.send()>` a broadcast message, the broadcast (or
multicast) MAC address must first be registered using
`add_peer()<ESPNow.add_peer()>`. `send()<ESPNow.send()>` will always return
``True`` for broadcasts, regardless of whether any devices receive the
message. It is not permitted to encrypt messages sent to the broadcast
address or any multicast address.
**Note**: `ESPNow.send(None, msg)<ESPNow.send()>` will send to all registered
peers *except* the broadcast address. To send a broadcast or multicast
message, you must specify the broadcast (or multicast) MAC address as the
peer. For example::
bcast = b'\xff' * 6
e.add_peer(bcast)
e.send(bcast, "Hello World!")
ESPNow and Wifi Operation
-------------------------
ESPNow messages may be sent and received on any `active()<network.WLAN.active>`
`WLAN<network.WLAN()>` interface (``network.STA_IF`` or ``network.AP_IF``), even
if that interface is also connected to a wifi network or configured as an access
point. When an ESP32 or ESP8266 device connects to a Wifi Access Point (see
`ESP32 Quickref <../esp32/quickref.html#networking>`__) the following things
happen which affect ESPNow communications:
1. Wifi Power-saving Mode (`network.WLAN.PM_PERFORMANCE`)
is automatically activated and
2. The radio on the esp device changes wifi ``channel`` to match the channel
used by the Access Point.
**Wifi Power-saving Mode:** (see `Espressif Docs <https://docs.espressif.com/
projects/esp-idf/en/latest/esp32/api-guides/
wifi.html#esp32-wi-fi-power-saving-mode>`_) The power saving mode causes the
device to turn off the radio periodically (typically for hundreds of
milliseconds), making it unreliable in receiving ESPNow messages. This can be
resolved by either of:
1. Disabling the power-saving mode on the STA_IF interface;
- Use ``sta.config(pm=sta.PM_NONE)``
2. Turning on the AP_IF interface, which will disable the power saving mode.
However, the device will then be advertising an active wifi access point.
- You **may** also choose to send your messages via the AP_IF interface, but
this is not necessary.
- ESP8266 peers must send messages to this AP_IF interface (see below).
3. Configuring ESPNow clients to retry sending messages.
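A retry loop along the lines of option 3 might look like the following sketch
(``send_with_retry`` is a hypothetical helper and the retry count and delay are
arbitrary)::

    import time

    def send_with_retry(e, peer, msg, retries=5, delay_ms=20):
        for _ in range(retries):
            if e.send(peer, msg):
                return True
            time.sleep_ms(delay_ms)
        return False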
**Receiving messages from an ESP8266 device:** Strangely, an ESP32 device
connected to a wifi network using method 1 or 2 above, will receive ESPNow
messages sent to the STA_IF MAC address from another ESP32 device, but will
**reject** messages from an ESP8266 device! To receive messages from an
ESP8266 device, the AP_IF interface must be set to ``active(True)`` **and**
messages must be sent to the AP_IF MAC address.
**Managing wifi channels:** Any other ESPNow devices wishing to communicate with
a device which is also connected to a Wifi Access Point MUST use the same
channel. A common scenario is where one ESPNow device is connected to a wifi
router and acts as a proxy for messages from a group of sensors connected via
ESPNow:
**Proxy:** ::
import network, time, espnow
sta, ap = wifi_reset() # Reset wifi to AP off, STA on and disconnected
sta.connect('myssid', 'mypassword')
while not sta.isconnected(): # Wait until connected...
time.sleep(0.1)
sta.config(pm=sta.PM_NONE) # ..then disable power saving
# Print the wifi channel used AFTER finished connecting to access point
print("Proxy running on channel:", sta.config("channel"))
e = espnow.ESPNow(); e.active(True)
for peer, msg in e:
# Receive espnow messages and forward them to MQTT broker over wifi
**Sensor:** ::
import network, espnow
sta, ap = wifi_reset() # Reset wifi to AP off, STA on and disconnected
sta.config(channel=6) # Change to the channel used by the proxy above.
peer = b'0\xaa\xaa\xaa\xaa\xaa' # MAC address of proxy
e = espnow.ESPNow(); e.active(True);
e.add_peer(peer)
while True:
msg = read_sensor()
e.send(peer, msg)
time.sleep(1)
Other issues to take care with when using ESPNow with wifi are:
- **Set WIFI to known state on startup:** MicroPython does not reset the wifi
peripheral after a soft reset. This can lead to unexpected behaviour. To
guarantee the wifi is reset to a known state after a soft reset make sure you
deactivate the STA_IF and AP_IF before setting them to the desired state at
startup, eg.::
import network, time
def wifi_reset(): # Reset wifi to AP_IF off, STA_IF on and disconnected
sta = network.WLAN(network.STA_IF); sta.active(False)
ap = network.WLAN(network.AP_IF); ap.active(False)
sta.active(True)
while not sta.active():
time.sleep(0.1)
sta.disconnect() # For ESP8266
while sta.isconnected():
time.sleep(0.1)
return sta, ap
sta, ap = wifi_reset()
Remember that a soft reset occurs every time you connect to the device REPL
and when you type ``ctrl-D``.
- **STA_IF and AP_IF always operate on the same channel:** the AP_IF will change
channel when you connect to a wifi network, regardless of the channel you set
for the AP_IF (see `Attention Note 3
<https://docs.espressif.com/
projects/esp-idf/en/latest/esp32/api-reference/network/esp_wifi.html
#_CPPv419esp_wifi_set_config16wifi_interface_tP13wifi_config_t>`_
). After all, there is really only one wifi radio on the device, which is
shared by the STA_IF and AP_IF virtual devices.
- **Disable automatic channel assignment on your wifi router:** If the wifi
router for your wifi network is configured to automatically assign the wifi
channel, it may change the channel for the network if it detects interference
from other wifi routers. When this occurs, the ESP devices connected to the
wifi network will also change channels to match the router, but other
ESPNow-only devices will remain on the previous channel and communication will
be lost. To mitigate this, either set your wifi router to use a fixed wifi
channel or configure your devices to re-scan the wifi channels if they are
unable to find their expected peers on the current channel.
- **MicroPython re-scans wifi channels when trying to reconnect:** If the esp
device is connected to a Wifi Access Point that goes down, MicroPython will
automatically start scanning channels in an attempt to reconnect to the
Access Point. This means ESPNow messages will be lost while scanning for the
AP. This can be disabled by ``sta.config(reconnects=0)``, which will also
disable the automatic reconnection after losing connection.
- Some versions of the ESP IDF only permit sending ESPNow packets from the
STA_IF interface to peers which have been registered on the same wifi
channel as the STA_IF::
ESPNOW: Peer channel is not equal to the home channel, send fail!
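One way to avoid this error (a sketch, not required on all IDF versions) is to
register the peer explicitly on the channel that the STA_IF is currently
using::

    ch = sta.config("channel")           # channel the STA_IF is currently using
    e.add_peer(peer, channel=ch)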
ESPNow and Sleep Modes
----------------------
The `machine.lightsleep([time_ms])<machine.lightsleep>` and
`machine.deepsleep([time_ms])<machine.deepsleep>` functions can be used to put
the ESP32 and peripherals (including the WiFi and Bluetooth radios) to sleep.
This is useful in many applications to conserve battery power. However,
applications must disable the WLAN peripheral (using
`active(False)<network.WLAN.active>`) before entering light or deep sleep (see
`Sleep Modes <https://docs.espressif.com/
projects/esp-idf/en/latest/esp32/api-reference/system/sleep_modes.html>`_).
Otherwise the WiFi radio may not be initialised properly after wake from
sleep. If the ``STA_IF`` and ``AP_IF`` interfaces have both been set
`active(True)<network.WLAN.active()>` then both interfaces should be set
`active(False)<network.WLAN.active()>` before entering any sleep mode.
**Example:** deep sleep::
import network, machine, espnow
sta, ap = wifi_reset() # Reset wifi to AP off, STA on and disconnected
peer = b'0\xaa\xaa\xaa\xaa\xaa' # MAC address of peer
e = espnow.ESPNow()
e.active(True)
e.add_peer(peer) # Register peer on STA_IF
print('Sending ping...')
if not e.send(peer, b'ping'):
print('Ping failed!')
e.active(False)
sta.active(False) # Disable the wifi before sleep
print('Going to sleep...')
machine.deepsleep(10000) # Sleep for 10 seconds then reboot
**Example:** light sleep::
import network, machine, espnow
sta, ap = wifi_reset() # Reset wifi to AP off, STA on and disconnected
sta.config(channel=6)
peer = b'0\xaa\xaa\xaa\xaa\xaa' # MAC address of peer
e = espnow.ESPNow()
e.active(True)
e.add_peer(peer) # Register peer on STA_IF
while True:
print('Sending ping...')
if not e.send(peer, b'ping'):
print('Ping failed!')
sta.active(False) # Disable the wifi before sleep
print('Going to sleep...')
machine.lightsleep(10000) # Sleep for 10 seconds
sta.active(True)
sta.config(channel=6) # Wifi loses config after lightsleep()
@ -24,7 +24,7 @@ Functions
.. function:: mem_alloc()
- Return the number of bytes of heap RAM that are allocated.
+ Return the number of bytes of heap RAM that are allocated by Python code.
.. admonition:: Difference to CPython
   :class: attention
@ -33,8 +33,8 @@ Functions
.. function:: mem_free()
- Return the number of bytes of available heap RAM, or -1 if this amount
- is not known.
+ Return the number of bytes of heap RAM that is available for Python
+ code to allocate, or -1 if this amount is not known.
.. admonition:: Difference to CPython
   :class: attention
docs/library/gzip.rst Normal file
@ -0,0 +1,106 @@
:mod:`gzip` -- gzip compression & decompression
===============================================
.. module:: gzip
:synopsis: gzip compression & decompression
|see_cpython_module| :mod:`python:gzip`.
This module allows compression and decompression of binary data with the
`DEFLATE algorithm <https://en.wikipedia.org/wiki/DEFLATE>`_ used by the gzip
file format.
.. note:: Prefer to use :class:`deflate.DeflateIO` instead of the functions in this
module as it provides a streaming interface to compression and decompression
which is convenient and more memory efficient when working with reading or
writing compressed data to a file, socket, or stream.
**Availability:**
* This module is **not present by default** in official MicroPython firmware
releases as it duplicates functionality available in the :mod:`deflate
<deflate>` module.
* A copy of this module can be installed (or frozen)
from :term:`micropython-lib` (`source <https://github.com/micropython/micropython-lib/blob/master/python-stdlib/gzip/gzip.py>`_).
See :ref:`packages` for more information. This documentation describes that module.
* Compression support will only be available if compression support is enabled
in the built-in :mod:`deflate <deflate>` module.
Functions
---------
.. function:: open(filename, mode, /)
Wrapper around built-in :func:`open` returning a GzipFile instance.
.. function:: decompress(data, /)
Decompresses *data* into a bytes object.
.. function:: compress(data, /)
Compresses *data* into a bytes object.
Classes
-------
.. class:: GzipFile(*, fileobj, mode)
This class can be used to wrap a *fileobj* which is any
:term:`stream-like <stream>` object such as a file, socket, or stream
(including :class:`io.BytesIO`). It is itself a stream and implements the
standard read/readinto/write/close methods.
When the *mode* argument is ``"rb"``, reads from the GzipFile instance will
decompress the data in the underlying stream and return decompressed data.
If compression support is enabled then the *mode* argument can be set to
``"wb"``, and writes to the GzipFile instance will be compressed and written
to the underlying stream.
By default the GzipFile class will read and write data using the gzip file
format, including a header and footer with checksum and a window size of 512
bytes.
The **file**, **compresslevel**, and **mtime** arguments are not
supported. **fileobj** and **mode** must always be specified as keyword
arguments.
Examples
--------
A typical use case for :class:`gzip.GzipFile` is to read or write a compressed
file from storage:
.. code:: python
import gzip
# Reading:
with open("data.gz", "rb") as f:
with gzip.GzipFile(fileobj=f, mode="rb") as g:
# Use g.read(), g.readinto(), etc.
# Same, but using gzip.open:
with gzip.open("data.gz", "rb") as f:
# Use f.read(), f.readinto(), etc.
# Writing:
with open("data.gz", "wb") as f:
with gzip.GzipFile(fileobj=f, mode="wb") as g:
# Use g.write(...) etc
# Same, but using gzip.open:
with gzip.open("data.gz", "wb") as f:
# Use f.write(...) etc
# Write a dictionary as JSON in gzip format, with a
# small (64 byte) window size.
config = { ... }
with gzip.open("config.gz", "wb") as f:
json.dump(config, f)
For guidance on working with gzip sources and choosing the window size see the
note at the :ref:`end of the deflate documentation <deflate_wbits>`.
@ -8,15 +8,17 @@ MicroPython libraries
Important summary of this section
* MicroPython provides built-in modules that mirror the functionality of the
-   Python standard library (e.g. :mod:`os`, :mod:`time`), as well as
-   MicroPython-specific modules (e.g. :mod:`bluetooth`, :mod:`machine`).
- * Most standard library modules implement a subset of the functionality of
-   the equivalent Python module, and in a few cases provide some
-   MicroPython-specific extensions (e.g. :mod:`array`, :mod:`os`)
+   :ref:`Python standard library <micropython_lib_python>` (e.g. :mod:`os`,
+   :mod:`time`), as well as :ref:`MicroPython-specific modules <micropython_lib_micropython>`
+   (e.g. :mod:`bluetooth`, :mod:`machine`).
+ * Most Python standard library modules implement a subset of the
+   functionality of the equivalent Python module, and in a few cases provide
+   some MicroPython-specific extensions (e.g. :mod:`array`, :mod:`os`)
* Due to resource constraints or other limitations, some ports or firmware
  versions may not include all the functionality documented here.
- * To allow for extensibility, the built-in modules can be extended from
-   Python code loaded onto the device.
+ * To allow for extensibility, some built-in modules can be
+   :ref:`extended from Python code <micropython_lib_extending>` loaded onto
+   the device filesystem.
This chapter describes modules (function and class libraries) which are built
into MicroPython. This documentation in general aspires to describe all modules
@ -41,6 +43,8 @@ Beyond the built-in libraries described in this documentation, many more
modules from the Python standard library, as well as further MicroPython
extensions to it, can be found in :term:`micropython-lib`.
+ .. _micropython_lib_python:
Python standard libraries and micro-libraries
---------------------------------------------
@ -53,18 +57,21 @@ library.
:maxdepth: 1
array.rst
+ asyncio.rst
binascii.rst
builtins.rst
cmath.rst
collections.rst
errno.rst
gc.rst
+ gzip.rst
hashlib.rst
heapq.rst
io.rst
json.rst
math.rst
os.rst
+ platform.rst
random.rst
re.rst
select.rst
@ -73,10 +80,10 @@ library.
struct.rst
sys.rst
time.rst
- uasyncio.rst
zlib.rst
_thread.rst
+ .. _micropython_lib_micropython:
MicroPython-specific libraries
------------------------------
@ -90,12 +97,15 @@ the following libraries.
bluetooth.rst
btree.rst
cryptolib.rst
+ deflate.rst
framebuf.rst
machine.rst
micropython.rst
neopixel.rst
network.rst
+ openamp.rst
uctypes.rst
+ vfs.rst
The following libraries provide drivers for hardware components.
@ -155,6 +165,11 @@ The following libraries are specific to the ESP8266 and ESP32.
esp.rst
esp32.rst
+ .. toctree::
+    :maxdepth: 1
+    espnow.rst
Libraries specific to the RP2040
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -176,23 +191,60 @@ The following libraries are specific to the Zephyr port.
zephyr.rst
+ .. _micropython_lib_extending:
Extending built-in libraries from Python
----------------------------------------
- In most cases, the above modules are actually named ``umodule`` rather than
- ``module``, but MicroPython will alias any module prefixed with a ``u`` to the
- non-``u`` version. However a file (or :term:`frozen module`) named
- ``module.py`` will take precedence over this alias.
+ A subset of the built-in modules are able to be extended by Python code by
+ providing a module of the same name in the filesystem. This extensibility
+ applies to the following Python standard library modules which are built-in to
+ the firmware: ``array``, ``binascii``, ``collections``, ``errno``, ``gzip``,
+ ``hashlib``, ``heapq``, ``io``, ``json``, ``os``, ``platform``, ``random``,
+ ``re``, ``select``, ``socket``, ``ssl``, ``struct``, ``time`` ``zlib``, as well
+ as the MicroPython-specific ``machine`` module. All other built-in modules
+ cannot be extended from the filesystem.
This allows the user to provide an extended implementation of a built-in library
- (perhaps to provide additional CPython compatibility). The user-provided module
- (in ``module.py``) can still use the built-in functionality by importing
- ``umodule`` directly. This is used extensively in :term:`micropython-lib`. See
- :ref:`packages` for more information.
+ (perhaps to provide additional CPython compatibility or missing functionality).
+ This is used extensively in :term:`micropython-lib`, see :ref:`packages` for
+ more information. The filesystem module will typically do a wildcard import of
+ the built-in module in order to inherit all the globals (classes, functions and
+ variables) from the built-in.
- This applies to both the Python standard libraries (e.g. ``os``, ``time``, etc),
- but also the MicroPython libraries too (e.g. ``machine``, ``bluetooth``, etc).
- The main exception is the port-specific libraries (``pyb``, ``esp``, etc).
+ In MicroPython v1.21.0 and higher, to prevent the filesystem module from
+ importing itself, it can force an import of the built-in module it by
+ temporarily clearing ``sys.path`` during the import. For example, to extend the
+ ``time`` module from Python, a file named ``time.py`` on the filesystem would
+ do the following::
- *Other than when you specifically want to force the use of the built-in module,
- we recommend always using* ``import module`` *rather than* ``import umodule``.
+     _path = sys.path
+     sys.path = ()
+     try:
+         from time import *
+     finally:
+         sys.path = _path
+         del _path
+     def extra_method():
+         pass
+ The result is that ``time.py`` contains all the globals of the built-in ``time``
+ module, but adds ``extra_method``.
+ In earlier versions of MicroPython, you can force an import of a built-in module
+ by appending a ``u`` to the start of its name. For example, ``import utime``
+ instead of ``import time``. For example, ``time.py`` on the filesystem could
+ look like::
+     from utime import *
+     def extra_method():
+         pass
+ This way is still supported, but the ``sys.path`` method described above is now
+ preferred as the ``u``-prefix will be removed from the names of built-in
+ modules in a future version of MicroPython.
+ *Other than when it specifically needs to force the use of the built-in module,
+ code should always use* ``import module`` *rather than* ``import umodule``.
@ -86,16 +86,6 @@ Functions
Classes
-------
- .. class:: FileIO(...)
- This is type of a file open in binary mode, e.g. using ``open(name, "rb")``.
- You should not instantiate this class directly.
- .. class:: TextIOWrapper(...)
- This is type of a file open in text mode, e.g. using ``open(name, "rt")``.
- You should not instantiate this class directly.
.. class:: StringIO([string])
.. class:: BytesIO([string])
@ -4,7 +4,7 @@
class ADC -- analog to digital conversion
=========================================
- The ADC class provides an interface to analog-to-digital convertors, and
+ The ADC class provides an interface to analog-to-digital converters, and
represents a single endpoint that can sample a continuous voltage and
convert it to a discretised value.
@ -39,9 +39,9 @@ Methods
Configure the ADC peripheral. *bits* will set the resolution of the
conversion process.
- .. method:: ADCBlock.connect(channel)
- ADCBlock.connect(source)
- ADCBlock.connect(channel, source)
+ .. method:: ADCBlock.connect(channel, *, ...)
+ ADCBlock.connect(source, *, ...)
+ ADCBlock.connect(channel, source, *, ...)
Connect up a channel on the ADC peripheral so it is ready for sampling,
and return an :ref:`ADC <machine.ADC>` object that represents that connection.
@ -56,3 +56,6 @@ Methods
If both *channel* and *source* are given then they are connected together
and made ready for sampling.
+ Any additional keyword arguments are used to configure the returned ADC object,
+ via its :meth:`init <machine.ADC.init>` method.
@ -94,7 +94,7 @@ General Methods
- *freq* is the SCL clock rate
In the case of hardware I2C the actual clock frequency may be lower than the
- requested frequency. This is dependant on the platform hardware. The actual
+ requested frequency. This is dependent on the platform hardware. The actual
rate may be determined by printing the I2C object.
.. method:: I2C.deinit()
@ -47,7 +47,7 @@ I2S objects can be created and initialized using::
3 modes of operation are supported:
- blocking
- non-blocking
- - uasyncio
+ - asyncio
blocking::
@ -63,13 +63,13 @@ non-blocking::
audio_in.irq(i2s_callback) # i2s_callback is called when buf is filled
num_read = audio_in.readinto(buf) # returns immediately
- uasyncio::
+ asyncio::
- swriter = uasyncio.StreamWriter(audio_out)
+ swriter = asyncio.StreamWriter(audio_out)
swriter.write(buf)
await swriter.drain()
- sreader = uasyncio.StreamReader(audio_in)
+ sreader = asyncio.StreamReader(audio_in)
num_read = await sreader.readinto(buf)
Some codec devices like the WM8960 or SGTL5000 require separate initialization
@ -103,7 +103,7 @@ Constructor
- ``ibuf`` specifies internal buffer length (bytes)
For all ports, DMA runs continuously in the background and allows user applications to perform other operations while
- sample data is transfered between the internal buffer and the I2S peripheral unit.
+ sample data is transferred between the internal buffer and the I2S peripheral unit.
Increasing the size of the internal buffer has the potential to increase the time that user applications can perform non-I2S operations
before underflow (e.g. ``write`` method) or overflow (e.g. ``readinto`` method).
@ -10,7 +10,8 @@ Example usage::
from machine import PWM
- pwm = PWM(pin) # create a PWM object on a pin
+ pwm = PWM(pin, freq=50, duty_u16=8192) # create a PWM object on a pin
+ # and set freq and duty
pwm.duty_u16(32768) # set duty to 50%
# reinitialise with a period of 200us, duty of 5us
@ -23,7 +24,7 @@ Example usage::
Constructors
------------
- .. class:: PWM(dest, *, freq, duty_u16, duty_ns)
+ .. class:: PWM(dest, *, freq, duty_u16, duty_ns, invert)
Construct and return a new PWM object using the following parameters:
@ -34,10 +35,12 @@ Constructors
PWM cycle.
- *duty_u16* sets the duty cycle as a ratio ``duty_u16 / 65535``.
- *duty_ns* sets the pulse width in nanoseconds.
+ - *invert* inverts the respective output if the value is True
Setting *freq* may affect other PWM objects if the objects share the same
underlying PWM generator (this is hardware specific).
Only one of *duty_u16* and *duty_ns* should be specified at a time.
+ *invert* is not available at all ports.
Methods
-------
@ -238,6 +238,12 @@ The following methods are not part of the core Pin API and only implemented on c
Availability: cc3200 port. Availability: cc3200 port.
.. method:: Pin.toggle()
Toggle output pin from "0" to "1" or vice-versa.
Availability: mimxrt, samd, rp2 ports.
Constants
---------
@ -75,6 +75,21 @@ Methods
- ``wake`` specifies the sleep mode from where this interrupt can wake
up the system.
.. method:: RTC.memory([data])
``RTC.memory(data)`` will write *data* to the RTC memory, where *data* is any
object which supports the buffer protocol (including `bytes`, `bytearray`,
`memoryview` and `array.array`). ``RTC.memory()`` reads RTC memory and returns
a `bytes` object.
Data written to RTC user memory is persistent across restarts, including
`machine.soft_reset()` and `machine.deepsleep()`.
The maximum length of RTC user memory is 2048 bytes by default on esp32,
and 492 bytes on esp8266.
Availability: esp32, esp8266 ports.
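A brief sketch of how ``RTC.memory()`` might be used to keep a boot counter
across deep sleep (the encoding of the counter is illustrative only)::

    from machine import RTC, deepsleep

    rtc = RTC()
    data = rtc.memory()                            # b'' on first power-up
    count = int(data.decode()) + 1 if data else 0
    rtc.memory(str(count).encode())                # survives deepsleep and soft reset
    deepsleep(10000)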
Constants
---------
@ -20,11 +20,11 @@ more info regarding the pins which can be remapped to be used with a SD card.
Example usage::
from machine import SD
- import os
+ import vfs
# clk cmd and dat0 pins must be passed along with
# their respective alternate functions
sd = machine.SD(pins=('GP10', 'GP11', 'GP15'))
- os.mount(sd, '/sd')
+ vfs.mount(sd, '/sd')
# do normal file operations
Constructors
@ -27,10 +27,10 @@ vary from platform to platform.
This class provides access to SD or MMC storage cards using either
a dedicated SD/MMC interface hardware or through an SPI channel.
- The class implements the block protocol defined by :class:`os.AbstractBlockDev`.
+ The class implements the block protocol defined by :class:`vfs.AbstractBlockDev`.
This allows the mounting of an SD card to be as simple as::
- os.mount(machine.SDCard(), "/sd")
+ vfs.mount(machine.SDCard(), "/sd")
The constructor takes the following parameters:
@ -98,7 +98,7 @@ Methods
specify them as a tuple of ``pins`` parameter.
In the case of hardware SPI the actual clock frequency may be lower than the
- requested baudrate. This is dependant on the platform hardware. The actual
+ requested baudrate. This is dependent on the platform hardware. The actual
rate may be determined by printing the SPI object.
.. method:: SPI.deinit()
@ -73,7 +73,7 @@ Methods
- ``callback`` - The callable to call upon expiration of the timer period.
The callback must take one argument, which is passed the Timer object.
The ``callback`` argument shall be specified. Otherwise an exception
- will occurr upon timer expiration:
+ will occur upon timer expiration:
``TypeError: 'NoneType' object isn't callable``
.. method:: Timer.deinit()
@ -152,31 +152,6 @@ Methods
Send a break condition on the bus. This drives the bus low for a duration
longer than required for a normal transmission of a character.
- .. method:: UART.irq(trigger, priority=1, handler=None, wake=machine.IDLE)
- Create a callback to be triggered when data is received on the UART.
- - *trigger* can only be ``UART.RX_ANY``
- - *priority* level of the interrupt. Can take values in the range 1-7.
- Higher values represent higher priorities.
- - *handler* an optional function to be called when new characters arrive.
- - *wake* can only be ``machine.IDLE``.
- .. note::
- The handler will be called whenever any of the following two conditions are met:
- - 8 new characters have been received.
- - At least 1 new character is waiting in the Rx buffer and the Rx line has been
- silent for the duration of 1 complete frame.
- This means that when the handler function is called there will be between 1 to 8
- characters waiting.
- Returns an irq object.
- Availability: WiPy.
.. method:: UART.flush()
Waits until all data has been sent. In case of a timeout, an exception is raised. The timeout
@ -185,7 +160,7 @@ Methods
.. note::
- For the rp2, esp8266 and nrf ports the call returns while the last byte is sent.
+ For the esp8266 and nrf ports the call returns while the last byte is sent.
If required, a one character wait time has to be added in the calling script.
Availability: rp2, esp32, esp8266, mimxrt, cc3200, stm32, nrf ports, renesas-ra
@ -197,17 +172,91 @@ Methods
.. note::
- For the rp2, esp8266 and nrf ports the call may return ``True`` even if the last byte
+ For the esp8266 and nrf ports the call may return ``True`` even if the last byte
of a transfer is still being sent. If required, a one character wait time has to be
added in the calling script.
Availability: rp2, esp32, esp8266, mimxrt, cc3200, stm32, nrf ports, renesas-ra
.. method:: UART.irq(handler=None, trigger=0, hard=False)
Configure an interrupt handler to be called when a UART event occurs.
The arguments are:
- *handler* is an optional function to be called when the interrupt event
triggers. The handler must take exactly one argument which is the
``UART`` instance.
- *trigger* configures the event(s) which can generate an interrupt.
Possible values are a mask of one or more of the following:
- ``UART.IRQ_RXIDLE`` interrupt after receiving at least one character
and then the RX line goes idle.
- ``UART.IRQ_RX`` interrupt after each received character.
- ``UART.IRQ_TXIDLE`` interrupt after or while the last character(s) of
a message are or have been sent.
- ``UART.IRQ_BREAK`` interrupt when a break state is detected at RX
- *hard* if true a hardware interrupt is used. This reduces the delay
between the pin change and the handler being called. Hard interrupt
handlers may not allocate memory; see :ref:`isr_rules`.
Returns an irq object.
Due to limitations of the hardware not all trigger events are available on all ports.
.. table:: Availability of triggers
:align: center
============== ========== ====== ========== =========
Port / Trigger IRQ_RXIDLE IRQ_RX IRQ_TXIDLE IRQ_BREAK
============== ========== ====== ========== =========
CC3200 yes
ESP32 yes yes yes
MIMXRT yes yes
NRF yes yes
RENESAS-RA yes yes
RP2 yes yes yes
SAMD yes yes yes
STM32 yes yes
============== ========== ====== ========== =========
.. note::
- The ESP32 port does not support the option hard=True.
- The rp2 port's UART.IRQ_TXIDLE is only triggered when the message
is longer than 5 characters and the trigger happens when still 5 characters
are to be sent.
- The rp2 port's UART.IRQ_BREAK needs receiving valid characters for triggering
again.
- The SAMD port's UART.IRQ_TXIDLE is triggered while the last character is sent.
- On STM32F4xx MCUs, using the trigger UART.IRQ_RXIDLE the handler will be called once
after the first character and then after the end of the message, when the line is
idle.
Availability: cc3200, esp32, mimxrt, nrf, renesas-ra, rp2, samd, stm32.
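As a brief sketch of attaching a receive handler (the UART id, pins and baud
rate are hypothetical and port-specific)::

    from machine import UART

    uart = UART(1, 115200)

    def on_rx(u):                 # the handler receives the UART instance
        print(u.read())

    uart.irq(handler=on_rx, trigger=UART.IRQ_RXIDLE)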
Constants
---------
- .. data:: UART.RX_ANY
+ .. data:: UART.RTS
+ UART.CTS
- IRQ trigger sources
+ Flow control options.
- Availability: WiPy.
+ Availability: esp32, mimxrt, renesas-ra, rp2, stm32.
.. data:: UART.IRQ_RXIDLE
UART.IRQ_RX
UART.IRQ_TXIDLE
UART.IRQ_BREAK
IRQ trigger sources.
Availability: renesas-ra, stm32, esp32, rp2040, mimxrt, samd, cc3200.
@ -0,0 +1,303 @@
.. currentmodule:: machine
.. _machine.USBDevice:
class USBDevice -- USB Device driver
====================================
.. note:: ``machine.USBDevice`` is currently only supported on the rp2 and samd
ports.
USBDevice provides a low-level Python API for implementing USB device functions using
Python code.
.. warning:: This low-level API assumes familiarity with the USB standard. There
are high-level `usb driver modules in micropython-lib`_ which provide a
simpler interface and more built-in functionality.
Terminology
-----------
- A "Runtime" USB device interface or driver is one which is defined using this
Python API after MicroPython initially starts up.
- A "Built-in" USB device interface or driver is one that is compiled into the
MicroPython firmware, and is always available. Examples are USB-CDC (serial
port) which is usually enabled by default. Built-in USB-MSC (Mass Storage) is an
option on some ports.
Lifecycle
---------
Managing a runtime USB interface can be tricky, especially if you are communicating
with MicroPython over a built-in USB-CDC serial port that's part of the same USB
device.
- A MicroPython soft reset will always clear all runtime USB interfaces, which
results in the entire USB device disconnecting from the host. If MicroPython
is also providing a built-in USB-CDC serial port then this will re-appear
after the soft reset.
This means some functions (like ``mpremote run``) that target the USB-CDC
serial port will immediately fail if a runtime USB interface is active,
because the port goes away when ``mpremote`` triggers a soft reset. The
operation should succeed on the second try, as after the soft reset there is
no more runtime USB interface.
- To configure a runtime USB device on every boot, it's recommended to place the
configuration code in the ``boot.py`` file on the :ref:`device VFS
<filesystem>`. On each reset this file is executed before the USB subsystem is
initialised (and before ``main.py``), so it allows the board to come up with the runtime
USB device immediately.
- For development or debugging, it may be convenient to connect a hardware
serial REPL and disable the built-in USB-CDC serial port entirely. Not all ports
support this (currently only ``rp2``). The custom build should be configured
with ``#define MICROPY_HW_USB_CDC (0)`` and ``#define
MICROPY_HW_ENABLE_UART_REPL (1)``.
Constructors
------------
.. class:: USBDevice()
Construct a USBDevice object.
.. note:: This object is a singleton, each call to this constructor
returns the same object reference.
Methods
-------
.. method:: USBDevice.config(desc_dev, desc_cfg, desc_strs=None, open_itf_cb=None, reset_cb=None, control_xfer_cb=None, xfer_cb=None)
Configures the ``USBDevice`` singleton object with the USB runtime device
state and callback functions:
- ``desc_dev`` - A bytes-like object containing
the new USB device descriptor.
- ``desc_cfg`` - A bytes-like object containing the
new USB configuration descriptor.
- ``desc_strs`` - Optional object holding strings or bytes objects
containing USB string descriptor values. Can be a list, a dict, or any
object which supports subscript indexing with integer keys (USB string
descriptor index).
Strings are an optional USB feature, and this parameter can be unset
(default) if no strings are referenced in the device and configuration
descriptors, or if only built-in strings should be used.
Apart from index 0, all the string values should be plain ASCII. Index 0
is the special "languages" USB descriptor, represented as a bytes object
with a custom format defined in the USB standard. ``None`` can be
returned at index 0 in order to use a default "English" language
descriptor.
To fall back to providing a built-in string value for a given index, a
subscript lookup can return ``None``, raise ``KeyError``, or raise
``IndexError``.
- ``open_itf_cb`` - This callback is called once for each interface
or Interface Association Descriptor in response to a Set
Configuration request from the USB Host (the final stage before
the USB device is available to the host).
The callback takes a single argument, which is a memoryview of the
interface or IAD descriptor that the host is accepting (including
all associated descriptors). It is a view into the same
``desc_cfg`` object that was provided as a separate
argument to this function. The memoryview is only valid until the
callback function returns.
- ``reset_cb`` - This callback is called when the USB host performs
a bus reset. The callback takes no arguments. Any in-progress
transfers will never complete. The USB host will most likely
proceed to re-enumerate the USB device by calling the descriptor
callbacks and then ``open_itf_cb()``.
- ``control_xfer_cb`` - This callback is called one or more times
for each USB control transfer (device Endpoint 0). It takes two
arguments.
The first argument is the control transfer stage. It is one of:
- ``1`` for SETUP stage.
- ``2`` for DATA stage.
- ``3`` for ACK stage.
Second argument is a memoryview to read the USB control request
data for this stage. The memoryview is only valid until the
callback function returns. Data in this memoryview will be the same
across each of the three stages of a single transfer.
A successful transfer consists of this callback being called in sequence
for the three stages. Generally speaking, if a device wants to do
something in response to a control request then it's best to wait until
the ACK stage to confirm the host controller completed the transfer as
expected.
The callback should return one of the following values:
- ``False`` to stall the endpoint and reject the transfer. It won't
proceed to any remaining stages.
- ``True`` to continue the transfer to the next stage.
- A buffer object can be returned at the SETUP stage when the transfer
will send or receive additional data. Typically this is the case when
the ``wLength`` field in the request has a non-zero value. This should
be a writable buffer for an ``OUT`` direction transfer, or a readable
buffer with data for an ``IN`` direction transfer.
- ``xfer_cb`` - This callback is called whenever a non-control
transfer submitted by calling :func:`USBDevice.submit_xfer` completes.
The callback has three arguments:
1. The Endpoint number for the completed transfer.
2. Result value: ``True`` if the transfer succeeded, ``False``
otherwise.
3. Number of bytes successfully transferred. In the case of a
"short" transfer, the result is ``True`` and ``xferred_bytes``
will be smaller than the length of the buffer submitted for the
transfer.
.. note:: If a bus reset occurs (see :func:`USBDevice.reset`),
``xfer_cb`` is not called for any transfers that have not
already completed.
.. method:: USBDevice.active(self, [value] /)
Returns the current active state of this runtime USB device as a
boolean. The runtime USB device is "active" when it is available to
interact with the host, it doesn't mean that a USB Host is actually
present.
If the optional ``value`` argument is set to a truthy value, then
the USB device will be activated.
If the optional ``value`` argument is set to a falsey value, then
the USB device is deactivated. While the USB device is deactivated,
it will not be detected by the USB Host.
To simulate a disconnect and a reconnect of the USB device, call
``active(False)`` followed by ``active(True)``. This may be
necessary if the runtime device configuration has changed, so that
the host sees the new device.
.. attribute:: USBDevice.builtin_driver
This attribute holds the current built-in driver configuration, and must be
set to one of the ``USBDevice.BUILTIN_`` named constants defined on this object.
By default it holds the value :data:`USBDevice.BUILTIN_NONE`.
Runtime USB device must be inactive when setting this field. Call the
:func:`USBDevice.active` function to deactivate before setting if necessary
(and again to activate after setting).
If this value is set to any value other than :data:`USBDevice.BUILTIN_NONE` then
the following restrictions apply to the :func:`USBDevice.config` arguments:
- ``desc_cfg`` should begin with the built-in USB interface descriptor data
accessible via :data:`USBDevice.builtin_driver` attribute ``desc_cfg``.
Descriptors appended after the built-in configuration descriptors should use
interface, string and endpoint numbers starting from the max built-in values
defined in :data:`USBDevice.builtin_driver` attributes ``itf_max``, ``str_max`` and
``ep_max``.
- The ``bNumInterfaces`` field in the built-in configuration
descriptor will also need to be updated if any new interfaces
are appended to the end of ``desc_cfg``.
- ``desc_strs`` should either be ``None`` or a list/dictionary where index
values less than ``USBDevice.builtin_driver.str_max`` are missing or have
value ``None``. This reserves those string indexes for the built-in
drivers. Placing a different string at any of these indexes overrides that
string in the built-in driver.
.. method:: USBDevice.remote_wakeup(self)
Wake up host if we are in suspend mode and the REMOTE_WAKEUP feature
is enabled by the host. This has to be enabled in the USB attributes,
and on the host. Returns ``True`` if remote wakeup was enabled and
active and the host was woken up.
.. method:: USBDevice.submit_xfer(self, ep, buffer /)
Submit a USB transfer on endpoint number ``ep``. ``buffer`` must be
an object implementing the buffer interface, with read access for
``IN`` endpoints and write access for ``OUT`` endpoints.
.. note:: ``ep`` cannot be the control Endpoint number 0. Control
transfers are built up through successive executions of
``control_xfer_cb``, see above.
Returns ``True`` if successful, ``False`` if the transfer could not
be queued (as USB device is not configured by host, or because
another transfer is queued on this endpoint.)
When the USB host completes the transfer, the ``xfer_cb`` callback
is called (see above).
Raises ``OSError`` with reason ``MP_EINVAL`` if the USB device is not
active.
.. method:: USBDevice.stall(self, ep, [stall] /)
Calling this function gets or sets the STALL state of a device endpoint.
``ep`` is the number of the endpoint.
If the optional ``stall`` parameter is set, this is a boolean flag
for the STALL state.
The return value is the current stall state of the endpoint (before
any change made by this function).
An endpoint that is set to STALL may remain stalled until this
function is called again, or STALL may be cleared automatically by
the USB host.
Raises ``OSError`` with reason ``MP_EINVAL`` if the USB device is not
active.
Constants
---------
.. data:: USBDevice.BUILTIN_NONE
.. data:: USBDevice.BUILTIN_DEFAULT
.. data:: USBDevice.BUILTIN_CDC
.. data:: USBDevice.BUILTIN_MSC
.. data:: USBDevice.BUILTIN_CDC_MSC
These constant objects hold the built-in descriptor data which is
compiled into the MicroPython firmware. ``USBDevice.BUILTIN_NONE`` and
``USBDevice.BUILTIN_DEFAULT`` are always present. Additional objects may be present
depending on the firmware build configuration and the actual built-in drivers.
.. note:: Currently at most one of ``USBDevice.BUILTIN_CDC``,
``USBDevice.BUILTIN_MSC`` and ``USBDevice.BUILTIN_CDC_MSC`` is defined
and will be the same object as ``USBDevice.BUILTIN_DEFAULT``.
These constants are defined to allow run-time detection of
the built-in driver (if any). Support for selecting one of
multiple built-in driver configurations may be added in the
future.
These values are assigned to :data:`USBDevice.builtin_driver` to get/set the
built-in configuration.
Each object contains the following read-only fields:
- ``itf_max`` - One more than the highest bInterfaceNumber value used
in the built-in configuration descriptor.
- ``ep_max`` - One more than the highest bEndpointAddress value used
in the built-in configuration descriptor. Does not include any
``IN`` flag bit (0x80).
- ``str_max`` - One more than the highest string descriptor index
value used by any built-in descriptor.
- ``desc_dev`` - ``bytes`` object containing the built-in USB device
descriptor.
- ``desc_cfg`` - ``bytes`` object containing the complete built-in USB
configuration descriptor.
.. _usb driver modules in micropython-lib: https://github.com/micropython/micropython-lib/tree/master/micropython/usb#readme
@ -25,9 +25,8 @@ Constructors
Create a WDT object and start it. The timeout must be given in milliseconds.
Once it is running the timeout cannot be changed and the WDT cannot be stopped either.
- Notes: On the esp32 the minimum timeout is 1 second. On the esp8266 a timeout
- cannot be specified, it is determined by the underlying system. On rp2040 devices,
- the maximum timeout is 8388 ms.
+ Notes: On the esp8266 a timeout cannot be specified, it is determined by the underlying system.
+ On rp2040 devices, the maximum timeout is 8388 ms.
Methods
-------
@ -127,14 +127,20 @@ Power related functions
.. function:: idle()
- Gates the clock to the CPU, useful to reduce power consumption at any time during
- short or long periods. Peripherals continue working and execution resumes as soon
- as any interrupt is triggered (on many ports this includes system timer
- interrupt occurring at regular intervals on the order of millisecond).
+ Gates the clock to the CPU, useful to reduce power consumption at any time
+ during short or long periods. Peripherals continue working and execution
+ resumes as soon as any interrupt is triggered, or at most one millisecond
+ after the CPU was paused.
+ It is recommended to call this function inside any tight loop that is
+ continuously checking for an external change (i.e. polling). This will reduce
+ power consumption without significantly impacting performance. To reduce
+ power consumption further then see the :func:`lightsleep`,
+ :func:`time.sleep()` and :func:`time.sleep_ms()` functions.
.. function:: sleep()
- .. note:: This function is deprecated, use `lightsleep()` instead with no arguments.
+ .. note:: This function is deprecated, use :func:`lightsleep()` instead with no arguments.
.. function:: lightsleep([time_ms])
              deepsleep([time_ms])
@ -265,3 +271,4 @@ Classes
machine.WDT.rst
machine.SD.rst
machine.SDCard.rst
+ machine.USBDevice.rst
@ -125,8 +125,11 @@ Functions
Return the natural logarithm of the gamma function of ``x``.
.. function:: log(x)
+             log(x, base)
- Return the natural logarithm of ``x``.
+ With one argument, return the natural logarithm of *x*.
+ With two arguments, return the logarithm of *x* to the given *base*.
.. function:: log10(x)
@ -136,6 +136,14 @@ Functions
the heap may be locked) and scheduling a function to call later will lift the heap may be locked) and scheduling a function to call later will lift
those restrictions. those restrictions.
On multi-threaded ports, the scheduled function's behaviour depends on
whether the Global Interpreter Lock (GIL) is enabled for the specific port:
- If GIL is enabled, the function can preempt any thread and run in its
context.
- If GIL is disabled, the function will only preempt the main thread and run
in its context.
Note: If `schedule()` is called from a preempting IRQ, when memory
allocation is not allowed and the callback to be passed to `schedule()` is
a bound method, passing this directly will fail. This is because creating a
@ -147,3 +155,71 @@ Functions
There is a finite queue to hold the scheduled functions and `schedule()`
will raise a `RuntimeError` if the queue is full.
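A minimal sketch of deferring work with `schedule()` (in practice the call is
usually made from an interrupt handler)::

    import micropython

    def process(arg):
        # runs shortly afterwards in the main context, where heap
        # allocation is allowed again
        print("processing", arg)

    micropython.schedule(process, 42)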
Classes
-------
.. class:: RingIO(size)
.. class:: RingIO(buffer)
:noindex:
Provides a fixed-size ringbuffer for bytes with a stream interface. It can be
considered a FIFO queue variant of `io.BytesIO`.
When created with an integer size a suitable buffer will be allocated.
Alternatively a `bytearray` or similar buffer protocol object can be provided
to the constructor for in-place use.
The classic ringbuffer algorithm is used, which allows a buffer of any size
to be used; however, one byte is consumed for tracking. If initialised
with an integer size this is accounted for: for example ``RingIO(16)``
will allocate a 17 byte buffer internally so it can hold 16 bytes of data.
When passing in a pre-allocated buffer, however, one byte less than its
original length will be available for storage, e.g. ``RingIO(bytearray(16))``
will only hold 15 bytes of data.
A RingIO instance can be IRQ / thread safe when used to pass data in a single
direction, e.g. when written to in an IRQ and read from in a non-IRQ function
(or vice versa). This does not hold if you try to, for example, write to a single
instance from both IRQ and non-IRQ code, as this would often cause data corruption.
.. method:: RingIO.any()
Returns an integer counting the number of characters that can be read.
.. method:: RingIO.read([nbytes])
Read available characters. This is a non-blocking function. If ``nbytes``
is specified then read at most that many bytes, otherwise read as much
data as possible.
Return value: a bytes object containing the bytes read. Will be a
zero-length bytes object if no data is available.
.. method:: RingIO.readline([nbytes])
Read a line ending in a newline character, if one exists in the buffer;
otherwise return the bytes available in the buffer. If ``nbytes`` is
specified then read at most that many bytes.
Return value: a bytes object containing the line read.
.. method:: RingIO.readinto(buf[, nbytes])
Read available bytes into the provided ``buf``. If ``nbytes`` is
specified then read at most that many bytes. Otherwise, read at
most ``len(buf)`` bytes.
Return value: Integer count of the number of bytes read into ``buf``.
.. method:: RingIO.write(buf)
Non-blocking write of bytes from ``buf`` into the ringbuffer, limited
by the available space in the ringbuffer.
Return value: Integer count of bytes written.
.. method:: RingIO.close()
No-op provided as part of standard `stream` interface. Has no effect
on data in the ringbuffer.
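A brief sketch of single-direction use (the size and data are arbitrary examples)::

    from micropython import RingIO

    rio = RingIO(16)       # holds up to 16 bytes of data
    rio.write(b"hello")    # e.g. from an IRQ handler
    rio.any()              # 5
    rio.read()             # b'hello', e.g. from main-loop code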


@ -9,9 +9,7 @@ This module provides a driver for WS2812 / NeoPixel LEDs.
.. note:: This module is only included by default on the ESP8266, ESP32 and RP2
ports. On STM32 / Pyboard and others, you can either install the
``neopixel`` package using :term:`mip`, or you can download the module
directly from :term:`micropython-lib` and copy it to the filesystem.
class NeoPixel
--------------
@ -45,7 +43,8 @@ Constructors
- *pin* is a machine.Pin instance.
- *n* is the number of LEDs in the strip.
- *bpp* is 3 for RGB LEDs, and 4 for RGBW LEDs.
- *timing* is 0 for 400kHz, and 1 for 800kHz LEDs (most are 800kHz). You
may also supply a timing tuple as accepted by `machine.bitstream()`.
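A short sketch of driving a strip (the pin number and pixel count are arbitrary
examples)::

    import machine
    import neopixel

    np = neopixel.NeoPixel(machine.Pin(4), 8)  # 8 RGB LEDs on GPIO4
    np[0] = (255, 0, 0)                        # set the first pixel to red
    np.write()                                 # push the data out to the strip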
Pixel access methods
--------------------


@ -10,7 +10,7 @@ Example usage::
import network
nic = network.LAN(0)
print(nic.ipconfig("addr4"))
# now use socket as usual
...


@ -0,0 +1,98 @@
.. currentmodule:: network
.. _network.PPP:
class PPP -- create network connections over serial PPP
=======================================================
This class allows you to create a network connection over a serial port using
the PPP protocol. It is only available on selected ports and boards.
Example usage::
import network
ppp = network.PPP(uart)
ppp.connect()
while not ppp.isconnected():
pass
print(ppp.ipconfig("addr4"))
# use the socket module as usual, etc
ppp.disconnect()
Constructors
------------
.. class:: PPP(stream)
Create a PPP driver object.
Arguments are:
- *stream* is any object that supports the stream protocol, but is most commonly a
:class:`machine.UART` instance. This stream object must have an ``irq()`` method
and an ``IRQ_RXIDLE`` constant, for use by `PPP.connect`.
Methods
-------
.. method:: PPP.connect(security=SEC_NONE, user=None, key=None)
Initiate a PPP connection with the given parameters:
- *security* is the type of security, either ``PPP.SEC_NONE``, ``PPP.SEC_PAP``,
or ``PPP.SEC_CHAP``.
- *user* is an optional user name to use with the security mode.
- *key* is an optional password to use with the security mode.
When this method is called the underlying stream has its interrupt configured to call
`PPP.poll` via ``stream.irq(ppp.poll, stream.IRQ_RXIDLE)``. This makes sure the
stream is polled, and data passed up the PPP stack, whenever data becomes available
on the stream.
The connection proceeds asynchronously, in the background.
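For example, a PAP-authenticated connection might be started like this (the
credentials shown are placeholders; ``ppp`` is the driver object created in the
example above)::

    ppp.connect(security=network.PPP.SEC_PAP, user="username", key="password")
    while not ppp.isconnected():
        pass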
.. method:: PPP.disconnect()
Terminate the connection. This must be called to cleanly close the PPP connection.
.. method:: PPP.isconnected()
Returns ``True`` if the PPP link is connected and up.
Returns ``False`` otherwise.
.. method:: PPP.status()
Returns the PPP status.
.. method:: PPP.config(config_parameters)
Sets or gets parameters of the PPP interface. There are currently no parameters that
can be set or retrieved.
.. method:: PPP.ipconfig('param')
PPP.ipconfig(param=value, ...)
See `AbstractNIC.ipconfig`.
.. method:: PPP.ifconfig([(ip, subnet, gateway, dns)])
See `AbstractNIC.ifconfig`.
.. method:: PPP.poll()
Poll the underlying stream for data, and pass it up the PPP stack.
This is called automatically if the stream is a UART with an RXIDLE interrupt,
so it's not usually necessary to call it manually.
Constants
---------
.. data:: PPP.SEC_NONE
PPP.SEC_PAP
PPP.SEC_CHAP
The type of connection security.


@ -13,7 +13,7 @@ Example usage::
import network
nic = network.WIZNET5K(pyb.SPI(1), pyb.Pin.board.X5, pyb.Pin.board.X4)
print(nic.ipconfig("addr4"))
# now use socket as usual
...
@ -51,20 +51,7 @@ Constructors
Methods
-------
This class implements most methods from `AbstractNIC <AbstractNIC>`, which are documented there. Additional methods are:
.. method:: WIZNET5K.regs()


@ -107,7 +107,7 @@ Methods
Get or set general network interface parameters. These methods allow working
with additional parameters beyond standard IP configuration (as dealt with by
`AbstractNIC.ipconfig()`). These include network-specific and hardware-specific
parameters. For setting parameters, keyword argument syntax should be used, and
multiple parameters can be set at once. For querying, a parameter name should
be quoted as a string, and only one parameter can be queried at a time::
@ -133,4 +133,20 @@ Methods
hostname The hostname that will be sent to DHCP (STA interfaces) and mDNS (if supported, both STA and AP). (Deprecated, use :func:`network.hostname` instead)
reconnects Number of reconnect attempts to make (integer, 0=none, -1=unlimited)
txpower Maximum transmit power in dBm (integer or float)
pm WiFi Power Management setting (see below for allowed values)
============= ===========
Constants
---------
.. data:: WLAN.PM_PERFORMANCE
WLAN.PM_POWERSAVE
WLAN.PM_NONE
Allowed values for the ``WLAN.config(pm=...)`` network interface parameter:
* ``PM_PERFORMANCE``: enable WiFi power management to balance power
savings and WiFi performance
* ``PM_POWERSAVE``: enable WiFi power management with additional power
savings and reduced WiFi performance
* ``PM_NONE``: disable WiFi power management
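For example, to favour power savings on a station interface (a sketch; the
constants are those documented above)::

    import network

    wlan = network.WLAN(network.STA_IF)
    wlan.active(True)
    wlan.config(pm=wlan.PM_POWERSAVE)  # trade some WiFi performance for power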


@ -20,7 +20,7 @@ This class provides a driver for the WiFi network processor in the WiPy. Example
wlan.connect('your-ssid', auth=(WLAN.WPA2, 'your-key'))
while not wlan.isconnected():
    time.sleep_ms(50)
print(wlan.ipconfig("addr4"))
# now use socket as usual
...
@ -96,16 +96,10 @@ Methods
In case of STA mode, returns ``True`` if connected to a WiFi access point and has a valid IP address.
In AP mode returns ``True`` when a station is connected, ``False`` otherwise.
.. method:: WLANWiPy.ipconfig('param')
            WLANWiPy.ipconfig(param=value, ...)
See :meth:`AbstractNIC.ipconfig <AbstractNIC.ipconfig>`. Supported parameters are: ``dhcp4``, ``addr4``, ``gw4``.
.. method:: WLANWiPy.mode([mode])


@ -24,7 +24,7 @@ For example::
print("Waiting for connection...") print("Waiting for connection...")
while not nic.isconnected(): while not nic.isconnected():
time.sleep(1) time.sleep(1)
print(nic.ifconfig()) print(nic.ipconfig("addr4"))
# now use socket as usual # now use socket as usual
import socket import socket
@ -113,8 +113,48 @@ parameter should be `id`.
connected to the AP. The list contains tuples of the form
(MAC, RSSI).
.. method:: AbstractNIC.ipconfig('param')
AbstractNIC.ipconfig(param=value, ...)
Get or set interface-specific IP-configuration parameters.
Supported parameters are the following (availability of a particular
parameter depends on the port and the specific network interface):
* ``dhcp4`` (``True/False``) obtain an IPv4 address, gateway and dns
server via DHCP. This method does not block and wait for an address
to be obtained. To check if an address was obtained, use the read-only
property ``has_dhcp4``.
* ``gw4`` Get/set the IPv4 default-gateway.
* ``dhcp6`` (``True/False``) obtain a DNS server via stateless DHCPv6.
Obtaining IP Addresses via DHCPv6 is currently not implemented.
* ``autoconf6`` (``True/False``) obtain a stateless IPv6 address via
the network prefix shared in router advertisements. To check if a
stateless address was obtained, use the read-only
property ``has_autoconf6``.
* ``addr4`` (e.g. ``192.168.0.4/24``) obtain the current IPv4 address
and network mask as ``(ip, subnet)``-tuple, regardless of how this
address was obtained. This method can be used to set a static IPv4
address either as ``(ip, subnet)``-tuple or in CIDR-notation.
* ``addr6`` (e.g. ``fe80::1234:5678``) obtain a list of current IPv6
addresses as ``(ip, state, preferred_lifetime, valid_lifetime)``-tuple.
This includes link-local, SLAAC and static addresses.
``preferred_lifetime`` and ``valid_lifetime`` represent the remaining
preferred and valid lifetime of each IPv6 address, in seconds.
``state`` indicates the current state of the address:
* ``0x08`` - ``0x0f`` indicates the address is tentative, counting the
number of probes sent.
* ``0x10`` The address is deprecated (but still valid)
* ``0x30`` The address is preferred (and valid)
* ``0x40`` The address is duplicated and cannot be used.
This method can be used to set a static IPv6
address, by setting this parameter to the address, like ``fe80::1234:5678``.
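A sketch of typical calls (the addresses shown are examples)::

    nic.ipconfig(dhcp4=True)                   # obtain an IPv4 address via DHCP
    nic.ipconfig("addr4")                      # e.g. ('192.168.0.4', '255.255.255.0')
    nic.ipconfig(addr4="192.168.0.4/24", gw4="192.168.0.1")   # static setup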
.. method:: AbstractNIC.ifconfig([(ip, subnet, gateway, dns)])
.. note:: This function is deprecated, use `ipconfig()` instead.
Get/set IP-level network interface parameters: IP address, subnet mask,
gateway and DNS server. When called with no arguments, this method returns
a 4-tuple with the above information. To set the above values, pass a
@ -127,7 +167,7 @@ parameter should be `id`.
Get or set general network interface parameters. These methods allow working
with additional parameters beyond standard IP configuration (as dealt with by
`ipconfig()`). These include network-specific and hardware-specific
parameters. For setting parameters, the keyword argument
syntax should be used, and multiple parameters can be set at once. For
querying, a parameter name should be quoted as a string, and only one
@ -152,6 +192,7 @@ provide a way to control networking interfaces of various kinds.
network.WLANWiPy.rst
network.WIZNET5K.rst
network.LAN.rst
network.PPP.rst
Network functions
=================
@ -171,8 +212,8 @@ The following are functions available in the network module.
.. function:: hostname([name])
Get or set the hostname that will identify this device on the network. It will
be used by all interfaces.
This hostname is used for:
* Sending to the DHCP server in the client request. (If using DHCP)
@ -182,8 +223,33 @@ The following are functions available in the network module.
If the function is called without parameters, it returns the current
hostname.
A change in hostname is typically only applied during connection. For DHCP
this is because the hostname is part of the DHCP client request, and the
implementation of mDNS in most ports only initialises the hostname once
during connection. For this reason, you must set the hostname before
activating/connecting your network interfaces.
The length of the hostname is limited to 32 characters.
:term:`MicroPython ports <MicroPython port>` may choose to set a lower
limit for memory reasons. If the given name does not fit, a `ValueError`
is raised.
The default hostname is typically the name of the board.
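For example (the name itself is arbitrary)::

    import network

    network.hostname("sensor-node-1")   # set before activating any interface
    network.hostname()                  # returns 'sensor-node-1'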
.. function:: ipconfig('param')
ipconfig(param=value, ...)
Get or set global IP-configuration parameters.
Supported parameters are the following (availability of a particular
parameter depends on the port and the specific network interface):
* ``dns`` Get/set DNS server. This method can support both IPv4 and
IPv6 addresses.
* ``prefer`` (``4/6``) Specify which address type to return, if a domain
name has both A and AAAA records. Note that this does not clear the
local DNS cache, so any previously obtained addresses might not
change.
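A brief sketch (the DNS server address is an example)::

    import network

    network.ipconfig(dns="8.8.8.8")   # set the global DNS server
    network.ipconfig(prefer=6)        # prefer AAAA results when both record types exist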
.. function:: phy_mode([mode])
Get or set the PHY mode.

docs/library/openamp.rst

@ -0,0 +1,115 @@
:mod:`openamp` -- provides standard Asymmetric Multiprocessing (AMP) support
============================================================================
.. module:: openamp
:synopsis: provides standard Asymmetric Multiprocessing (AMP) support
The ``openamp`` module provides a standard inter-processor communications infrastructure
for MicroPython. The module handles all of the details of OpenAMP, such as setting up
the shared resource table, initializing vrings, etc. It provides an API for using the
RPMsg bus infrastructure with the `Endpoint` class, and provides processor Life Cycle
Management (LCM) support, such as loading firmware and starting and stopping a remote
core, via the `RemoteProc` class.
Example usage::
import openamp
def ept_recv_callback(src, data):
print("Received message on endpoint", data)
# Create a new RPMsg endpoint to communicate with the remote core.
ept = openamp.Endpoint("vuart-channel", callback=ept_recv_callback)
# Create a RemoteProc object, load its firmware and start it.
rproc = openamp.RemoteProc("virtual_uart.elf") # Or entry point address (ex 0x081E0000)
rproc.start()
while True:
if ept.is_ready():
ept.send("data")
Functions
---------
.. function:: new_service_callback(ns_callback)
Set the new service callback.
The *ns_callback* argument is a function that will be called when the remote processor
announces new services. At that point the host processor can choose to create the
announced endpoint, if this particular service is supported, or ignore it if it's
not. If this function is not set, the host processor should first register the
endpoint locally, and it will be automatically bound when the remote announces
the service.
Endpoint class
--------------
.. class:: Endpoint(name, callback, src=ENDPOINT_ADDR_ANY, dest=ENDPOINT_ADDR_ANY)
Construct a new RPMsg Endpoint. An endpoint is a bidirectional communication
channel between two cores.
Arguments are:
- *name* is the name of the endpoint.
- *callback* is a function that is called when the endpoint receives data. It is
called with the source address of the remote endpoint, and the data as bytes passed by reference.
- *src* is the endpoint source address. If none is provided one will be assigned
to the endpoint by the library.
- *dest* is the endpoint destination address. If the endpoint is created from the
new_service_callback, this must be provided and it must match the remote endpoint's
source address. If the endpoint is registered locally, before the announcement, the
destination address will be assigned by the library when the endpoint is bound.
.. method:: Endpoint.deinit()
Destroy the endpoint and release all of its resources.
.. method:: Endpoint.is_ready()
Returns ``True`` if the endpoint is ready to send (i.e., has both a source and a destination address).
.. method:: Endpoint.send(src=-1, dest=-1, timeout=-1)
Send a message to the remote processor over this endpoint.
Arguments are:
- *src* is the source endpoint address of the message. If none is provided, the
source address the endpoint is bound to is used.
- *dest* is the destination endpoint address of the message. If none is provided,
the destination address the endpoint is bound to is used.
- *timeout* specifies the time in milliseconds to wait for a free buffer. By default
the function is blocking.
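Following the module example above, a send with a bounded wait might look like
this (a sketch, assuming the data is passed as the first argument, as in that
example, and ``timeout`` is given as a keyword)::

    if ept.is_ready():
        ept.send("data", timeout=1000)   # wait at most 1 second for a free buffer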
RemoteProc class
----------------
.. class:: RemoteProc(entry)
The RemoteProc object provides processor Life Cycle Management (LCM) support, such as
loading firmware, starting and stopping a remote core.
The *entry* argument can be a path to a firmware image, in which case the firmware is
loaded from file to its target memory, or an entry point address, in which case the
firmware must already be loaded at the given address.
.. method:: RemoteProc.start()
Starts the remote processor.
.. method:: RemoteProc.stop()
Stops the remote processor. The exact behavior is platform-dependent. On the STM32H7 for
example it's not possible to stop and then restart the Cortex-M4 core, so a complete
system reset is performed on a call to this function.
.. method:: RemoteProc.shutdown()
Shutdown stops the remote processor and releases all of its resources. The exact behavior
is platform-dependent; typically it disables power and clocks to the remote core.
This function is also used as the finaliser (i.e., called when the ``RemoteProc`` object is
collected). Note that on the STM32H7, it's not possible to stop and then restart the
Cortex-M4 core, so a complete system reset is performed on a call to this function.
