text (string, length 20–57.3k) | labels (class label, 4 classes)
---|---
Title: BigQuery traceability labels were not available while using TaskGroup
Body: ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==12.0.0
### Apache Airflow version
2.9.3
### Operating System
Ubuntu
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
BigQueryInsertJobOperator sets traceability job labels (airflow-dag & airflow-task) on BigQuery jobs. However, when using BigQueryInsertJobOperator inside a TaskGroup, the traceability labels are not available.
https://github.com/apache/airflow/commit/3e2bfb8b3ee80ddc18b00e461de53390dcc5a8b3
### What you think should happen instead
traceability_labels = {"airflow-dag": dag_label, "airflow-task": task_label}
The traceability labels should also be set when BigQueryInsertJobOperator is used inside a TaskGroup.
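A likely cause (this is an assumption on my part, not confirmed in the provider code): inside a TaskGroup the task id becomes `taskgroup_1.insert_job_1`, and `.` is not a valid character in BigQuery label values, so the label gets dropped. A minimal sketch of the kind of sanitization that would keep the labels valid (hypothetical helper, not the provider's actual implementation):
```python
import re

def _sanitize_label(value: str, max_length: int = 63) -> str:
    # BigQuery label values may only contain lowercase letters, digits, "-" and "_"
    value = re.sub(r"[^a-z0-9_-]", "-", value.lower())
    return value[:max_length]

traceability_labels = {
    "airflow-dag": _sanitize_label("test_dag"),
    "airflow-task": _sanitize_label("taskgroup_1.insert_job_1"),  # -> "taskgroup_1-insert_job_1"
}
print(traceability_labels)
```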
### How to reproduce
```
from datetime import datetime, timedelta
from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator
from airflow.operators.dummy import DummyOperator
from airflow.utils.task_group import TaskGroup

default_args = {
    "retries": 1,
    "retry_delay": timedelta(minutes=1),
}

with DAG(
    "test_dag",
    default_args=default_args,
    description="BQ test DAG",
    schedule_interval=timedelta(days=1),
    start_date=datetime(2025, 3, 2),
    catchup=False,
) as dag:
    start = DummyOperator(task_id="start")

    with TaskGroup("taskgroup_1", tooltip="task group #1") as section_1:
        t1 = BigQueryInsertJobOperator(
            task_id="insert_job_1",
            configuration={
                "query": {
                    "query": "SELECT * FROM `project.dataset.table_1`;",
                    "useLegacySql": False,
                }
            },
            gcp_conn_id="google_cloud_default",
        )
        t2 = BigQueryInsertJobOperator(
            task_id="insert_job_2",
            configuration={
                "query": {
                    "query": "SELECT * FROM `project.dataset.table_1`;",
                    "useLegacySql": False,
                },
            },
            gcp_conn_id="google_cloud_default",
        )

    end = DummyOperator(task_id="end")

    start >> section_1 >> end
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| 0easy
|
Title: Update ruff to 0.9.3
Body: ### Summary
Looking for a volunteer to update ruff to the latest version (0.9.3)
Files/lines to change:
```diff
diff --git a/pyproject.toml b/pyproject.toml
index ef3e198ff..3cffdb713 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -158,7 +158,7 @@ exclude = ["tests", "tests.*"]
[tool.ruff]
line-length = 100
target-version = "py39"
-required-version = "0.7.4"
+required-version = "0.9.3"
force-exclude = true
extend-include = ["*.ipynb"]
extend-exclude = [
diff --git a/requirements/lint-requirements.txt b/requirements/lint-requirements.txt
index bf647a510..9ce3d25b3 100644
--- a/requirements/lint-requirements.txt
+++ b/requirements/lint-requirements.txt
@@ -1,4 +1,4 @@
-ruff==0.7.4
+ruff==0.9.3
black[jupyter]==23.7.0
blacken-docs==1.18.0
pre-commit==4.0.1
```
### Notes
- Make sure to open a PR from a **non-master** branch.
- Sign off the commit using the `-s` flag when making a commit:
```sh
git commit -s -m "..."
# ^^ make sure to use this
```
- Include `#{issue_number}` (e.g. `#123`) in the PR description when opening a PR.
| 0easy
|
Title: Variable Types to Dataframe
Body: **Proposed feature**
It would be really useful to get a simple dataframe containing the variable name and the inferred variable type.
| 0easy
|
Title: Chat plugin alignment
Body: The text should be vertically centered instead of aligned to the top

| 0easy
|
Title: Launching docker container in interactive mode
Body: I was expecting `repo2docker https://github.com/binder-examples/conda-freeze /bin/bash` to give me a shell inside the container. Instead it builds the image and then exits straight away.
I think this is because we would have to do the equivalent of `docker run -it <image> /bin/bash` but don't. Should we add a `--interactive` CLI flag that turns on this behaviour?
**Note:** a solution to this is outlined in https://github.com/jupyter/repo2docker/issues/599#issuecomment-483402517. This would be a good way to get involved with contributing to repo2docker and something to help out with in general. | 0easy
|
Title: Move falcon.request_helpers.BoundedStream to its own module
Body: The ASGI version is in its own module, so let's do the same thing on the WSGI side. It should make it a bit more on-topic to have it in a separate module, and make it easier for contributors to track it down. This will also let us "privatize" request_helpers in the future without affecting BoundedStream (which is obviously part of the public interface).
This is a breaking change, but shouldn't cause much trouble because apps generally just refer to this class via the req.bounded_stream instance.
Be sure to also update any relevant docs and include a news fragment noting the breaking change.
`falcon.request_helpers.BoundedStream ==> falcon.stream.BoundedStream`
| 0easy
|
Title: Support alternative Vector Databases
Body: **Is your feature request related to a problem? Please describe.**
Currently Pinecone is the only supported vector database. Other alternatives like Milvus, Qdrant, DeepLake, etc. would allow a user to self-host their database and make it part of the docker-compose stack, self-hosting more of the bot's functionality.
**Describe the solution you'd like**
Refactor the current pinecone implementation and abstract the functionality behind a common interface, with a handler for each alternative solution.
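As a rough illustration of the kind of abstraction I mean (the class and method names below are made up, not an existing API):
```python
from abc import ABC, abstractmethod


class VectorStore(ABC):
    """Common interface; each backend (Pinecone, Milvus, Qdrant, DeepLake, ...) implements it."""

    @abstractmethod
    def upsert(self, ids: list, vectors: list, metadata: list) -> None:
        ...

    @abstractmethod
    def query(self, vector: list, top_k: int = 5) -> list:
        ...


class PineconeStore(VectorStore):
    """The existing pinecone logic would move behind this handler."""

    def upsert(self, ids, vectors, metadata):
        raise NotImplementedError

    def query(self, vector, top_k=5):
        raise NotImplementedError
```
The bot would then pick the concrete handler based on configuration in `.env`.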
**Describe alternatives you've considered**
Add more variables to the .env so that any of them could be enabled.
**Additional context**
I'm not sure whether there would be a benefit, or whether issues would arise, if multiple vector DBs were used simultaneously (e.g., having both Pinecone and Milvus enabled).
| 0easy
|
Title: Remove import time from example scripts
Body: ### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
Documentation
### Which Linux distribution did you use?
_No response_
### Which AutoKey GUI did you use?
Both
### Which AutoKey version did you use?
0.96.0 Beta 10
### How did you install AutoKey?
GitHub debs
### Can you briefly describe the issue?
### Can the issue be reproduced?
N/A
### What are the steps to reproduce the issue?
Just install AutoKey and view the sample scripts.
### What should have happened?
`Display Window Info` and `Phrase from selection` example scripts include `import time` statements that are not needed.
These statements should be removed.
### What actually happened?
_No response_
### Do you have screenshots?
_No response_
### Can you provide the output of the AutoKey command?
_No response_
### Anything else?
_No response_ | 0easy
|
Title: Bug: payment API cannot be called when selling in bulk
Body: Payment API: Alipay face-to-face payment ("当面付")
Ordering a single item: payment is called normally

Ordering multiple items: the call fails with an error

| 0easy
|
Title: Support unicode categories in regex translation checks
Body: ### Describe the problem
I need to be able to enforce tighter constraints on some strings across languages that don't use strict a-z characters.
Specifically, I want to restrict a string to any letter, number, -, or _, with a length of 1-32.
There's a shorthand regex that works for this:
```regex
^[-_\p{L}\p{N}\p{sc=Deva}\p{sc=Thai}]{1,32}$
```
This works using the Python `regex` module (not `re`), but it can't seem to be used with the existing translation flags (tested on Weblate 5.9.2).
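To illustrate (a small standalone example of the mismatch), the third-party `regex` module understands `\p{...}` Unicode property classes while the stdlib `re` module rejects them:
```python
import re
import regex  # third-party module (pip install regex)

pattern = r"^[-_\p{L}\p{N}\p{sc=Deva}\p{sc=Thai}]{1,32}$"

print(bool(regex.match(pattern, "user_name-01")))  # True
print(bool(regex.match(pattern, "नमस्ते")))  # True, Devanagari allowed via \p{sc=Deva}

try:
    re.match(pattern, "user_name-01")
except re.error as exc:
    print(f"stdlib re rejects the pattern: {exc}")  # bad escape \p
```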
### Describe the solution you would like
I would like to be able to apply the following translation flag
```regex
regex:'^[-_\p{L}\p{N}\p{sc=Deva}\p{sc=Thai}]{1,32}$'
```
### Describe alternatives you have considered
DIY checks and rejecting invalid strings when we load the translation
### Screenshots

### Additional context
_No response_ | 0easy
|
Title: 'DataTree' object does not support the context manager protocol
Body: ### What is your issue?
```python
import numpy as np
import pandas as pd
import xarray as xr

ds = xr.Dataset(
    {"foo": (("x", "y"), np.random.rand(4, 5))},
    coords={
        "x": [10, 20, 30, 40],
        "y": pd.date_range("2000-01-01", periods=5),
        "z": ("x", list("abcd")),
    },
)
ds.to_netcdf("saved_on_disk.nc")

# works as intended
with xr.open_dataset("saved_on_disk.nc") as ds:
    print(ds)

# doesn't work
with xr.open_datatree("saved_on_disk.nc") as dtree:
    print(dtree)
```
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[5], line 1
----> 1 with xr.open_datatree("saved_on_disk.nc") as dtree:
2 print(dtree)
TypeError: 'DataTree' object does not support the context manager protocol
```
| 0easy
|
Title: Inviting maintainers/contributors to the project
Body: Hello everyone,
First of all, I want to take a moment to thank all contributors and people who supported this project in any way ;) you are awesome!
If you like the project and have any interest in contributing/maintaining it, you can contact me here or send me a msg privately:
- Email: nidhalbacc@gmail.com
**PS: You need to be familiar with python and machine learning** | 0easy
|
Title: Fix paper reference
Body: The linked pdf file is not available anymore (404).
https://github.com/RaRe-Technologies/gensim/blob/70fc15985572bc10cd4424a1ac7fd8650389d43f/gensim/models/lsimodel.py#L325 | 0easy
|
Title: Add documentation for DictFactory and ListFactory
Body: `DictFactory` and `ListFactory` are defined, and exposed at the module toplevel. They are useful for simple dict/list building, and should be documented.
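For reference, a short usage sketch of the sort the documentation could show (the field names here are illustrative only):
```python
import factory


class AddressFactory(factory.DictFactory):
    street = "1 Main St"
    city = "Springfield"


class NumbersFactory(factory.ListFactory):
    param_1 = 1
    param_2 = factory.LazyFunction(lambda: 2 + 2)


print(AddressFactory())  # {'street': '1 Main St', 'city': 'Springfield'}
print(NumbersFactory())  # [1, 4]
```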
| 0easy
|
Title: time series forecasting: window features
Body: The transformer should create computations over windows of past values of the features, and populate them at time t, t being the time of the forecast.
It uses pandas rolling to output several computations (mean, max, std, etc.) and pandas shift to move the computations to the right row.
```
# `data` is a DataFrame with a datetime index; `variables` is a list of its column names.
tmp = (
    data[variables]
    .rolling(window='3H').mean()  # Average the last 3 hr values.
    .shift(freq='1H')             # Move the average 1 step forward.
)

# Rename the columns
tmp.columns = [v + '_window' for v in variables]

data = data.merge(tmp, left_index=True, right_index=True, how='left')
```
| 0easy
|
Title: Rename redis-py health check message
Body: https://github.com/aio-libs/aioredis-py/blob/c82fe277a5a6521aa9baacd672dd131313fe1b45/aioredis/client.py#L3873
Should we change this to aioredis-py @seandstewart @abrookins in case someone uses both redis-py and aioredis? | 0easy
|
Title: enable copy paste for inline examples
Body: We need to add some CSS and JS so the Python inline examples (the ones that start with `>>>`) can be copied. This is what a non-Python example looks like:

If the user clicks on the window, it'll copy the code. This is what a Python example looks like:

We should add the window-like CSS and the copy functionality - the CSS and JS is already there, we just need to connect them with these other types of examples.
Source: https://docs.ploomber.io/en/latest/api/_modules/tasks/ploomber.tasks.PythonCallable.html#ploomber.tasks.PythonCallable
| 0easy
|
Title: Document API with OpenAPI 3
Body: The general API documentation won't win any prizes; we should improve it (so it becomes usable) with something like [drf-spectacular](https://drf-spectacular.readthedocs.io/en/latest/) | 0easy
|
Title: Site shows a 502 error when behind a Cloudflare node
Body: As the title says. The domain goes through a Cloudflare node. After deploying the system following the developer's documentation and visiting the site, Cloudflare shows a 502 error. I want to solve this problem. What should I do?
|
Title: Make xonsh tolerant to inaccessible xonsh data paths
Body: <!--- Provide a general summary of the issue in the Title above -->
<!--- If you have a question along the lines of "How do I do this Bash command in xonsh"
please first look over the Bash to Xonsh translation guide: https://xon.sh/bash_to_xsh.html
If you don't find an answer there, please do open an issue! -->
## xonfig
<details>
My distro is actually NixOS.
```
$ xonfig
+------------------+--------------------+
| xonsh | 0.15.1 |
| Python | 3.11.8 |
| PLY | 3.11 |
| have readline | True |
| prompt toolkit | 3.0.43 |
| shell type | prompt_toolkit |
| history backend | json |
| pygments | 2.17.2 |
| on posix | True |
| on linux | True |
| distro | unknown |
| on wsl | False |
| on darwin | False |
| on windows | False |
| on cygwin | False |
| on msys2 | False |
| is superuser | False |
| default encoding | utf-8 |
| xonsh encoding | utf-8 |
| encoding errors | surrogateescape |
| xontrib | [] |
| RC file 1 | /etc/xonsh/xonshrc |
+------------------+--------------------+
```
</details>
## Expected Behavior
<!--- Tell us what should happen -->
Xonsh runs, though giving up logging command history to file.
People need a shell for data rescue if they mess up their filesystem and can only mount it read only.
## Current Behavior
<!--- Tell us what happens instead of the expected behavior -->
Xonsh keeps throwing errors on an `open(...)` call and I cannot input anything except pressing \<c-C\> to exit the shell.
<!--- If part of your bug report is a traceback, please first enter debug mode before triggering the error
To enter debug mode, set the environment variable `XONSH_DEBUG=1` _before_ starting `xonsh`.
On Linux and OSX, an easy way to to do this is to run `env XONSH_DEBUG=1 xonsh` -->
### Traceback (if applicable)
<details>
I haven't yet tried a readonly `~/.local/share/xonsh`, but I set `$XONSH_DATA_DIR` to `p"~/ro"` where I build up a readonly overlayfs, and got the following output (and now I understand why it is generating infinite logs: I set xonsh as my login shell and it keeps falling back to itself on error)
```
/etc/xonsh/xonshrc:16:16 - source-bash "/nix/store/ml2wn9nd7p5912ynfxq27dwlkh77qx1m-set-environment"
/etc/xonsh/xonshrc:16:16 + ![source-bash "/nix/store/ml2wn9nd7p5912ynfxq27dwlkh77qx1m-set-environment"]
Traceback (most recent call last):
File "/nix/store/3sgwcszkn0c5ab5nwm0k1iga4shj2ps0-python3-3.11.8-env/lib/python3.11/site-packages/xonsh/main.py", line 469, in main
args = premain(argv)
^^^^^^^^^^^^^
File "/nix/store/3sgwcszkn0c5ab5nwm0k1iga4shj2ps0-python3-3.11.8-env/lib/python3.11/site-packages/xonsh/main.py", line 409, in premain
env = start_services(shell_kwargs, args, pre_env=pre_env)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/3sgwcszkn0c5ab5nwm0k1iga4shj2ps0-python3-3.11.8-env/lib/python3.11/site-packages/xonsh/main.py", line 355, in start_services
_load_rc_files(shell_kwargs, args, env, execer, ctx)
File "/nix/store/3sgwcszkn0c5ab5nwm0k1iga4shj2ps0-python3-3.11.8-env/lib/python3.11/site-packages/xonsh/main.py", line 310, in _load_rc_files
XSH.rc_files = xonshrc_context(
^^^^^^^^^^^^^^^^
File "/nix/store/3sgwcszkn0c5ab5nwm0k1iga4shj2ps0-python3-3.11.8-env/lib/python3.11/site-packages/xonsh/environ.py", line 2492, in xonshrc_context
status = xonsh_script_run_control(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/3sgwcszkn0c5ab5nwm0k1iga4shj2ps0-python3-3.11.8-env/lib/python3.11/site-packages/xonsh/environ.py", line 2553, in xonsh_script_run_control
exc_info = run_script_with_cache(filename, execer, ctx)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/3sgwcszkn0c5ab5nwm0k1iga4shj2ps0-python3-3.11.8-env/lib/python3.11/site-packages/xonsh/codecache.py", line 169, in run_script_with_cache
update_cache(ccode, cachefname)
File "/nix/store/3sgwcszkn0c5ab5nwm0k1iga4shj2ps0-python3-3.11.8-env/lib/python3.11/site-packages/xonsh/codecache.py", line 98, in update_cache
with open(cache_file_name, "wb") as cfile:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [Errno 30] Read-only file system: 'ro/xonsh_script_cache/nix/store/hy8zfhlx9rjwws4w62bjdk5ix9x550q1-etc-xonsh-xonshrc.cpython-311'
Xonsh encountered an issue during launch
Failback to /run/current-system/sw/bin/xonsh
```
</details>
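A sketch of the kind of tolerance being asked for, using the `update_cache` frame from the traceback as an example (illustrative only, not a patch against the real xonsh code):
```python
def update_cache_tolerant(ccode: bytes, cache_file_name: str) -> None:
    """Best-effort cache write: silently skip caching when the data dir is not writable."""
    try:
        with open(cache_file_name, "wb") as cfile:
            cfile.write(ccode)
    except OSError:
        # e.g. [Errno 30] Read-only file system -- keep running without the cache
        pass
```
The same pattern would apply to the history backend and other writes under `$XONSH_DATA_DIR`.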
## Steps to Reproduce
<!--- Please try to write out a minimal reproducible snippet to trigger the bug, it will help us fix it! -->
1. make ~/.local read only (maybe through mounting an overlayfs)
2. start xonsh
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: `df` annotation lacking in `find_replace` function of `functions.py` submodule
Body: # Brief Description
`df` annotation lacking in `find_replace` function of `functions.py` submodule.
It seems to be the only function for which there is no type annotation for the dataframe. Clearly not critical. | 0easy
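For illustration, the fix is simply annotating the dataframe parameter and return value; the parameter list below is abbreviated and hypothetical:
```python
import pandas as pd


def find_replace(df: pd.DataFrame, **mappings) -> pd.DataFrame:
    """Same body as today; only the annotations are new."""
    ...
```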
|
Title: Minor refactoring suggestions to improve code quality
Body: Should be `poses.keys()` not `poses.items()`
https://github.com/mithi/hexapod-robot-simulator/blob/5d0d7beb3e057ee50b36484cdb971477db8db59f/pages/helpers.py#L22 | 0easy
|
Title: Switch between search engines
Body: I was looking at the search_on_web method, and although you can set the search engine it uses, there's no way to do it from the SearchInternetNode class. Are you planning to allow setting it via the node_config, the same way as max_results?
answer = search_on_web(query=search_query, max_results=self.max_results) | 0easy
|
Title: Contributing guide should state forking the repo
Body: In [this section](https://github.com/ploomber/ploomber/blob/master/CONTRIBUTING.md#setup-with-conda), we state cloning the repo; prior to that, the user should fork the repo into their account and only then clone that fork.
|
Title: Validate field names to ensure vertex fields are correctly named in the schema
Body: Ensure that the schema satisfies the following two properties:
- If a field's name starts with `out_` or `in_`, it must be a vertex field.
- All non-root vertex fields must start with either `out_` or `in_`. | 0easy
|
Title: clear tasks button not refreshing the UI
Body: After clicking `Clear tasks` the UI in the sidebar is not refreshed. | 0easy
|
Title: Refactor large test modules
Body: Some test modules (e.g. `test_admin_integration.py`) have become large and are hard to edit.
They should be broken up into smaller modules.
v3.3.1 | 0easy
|
Title: Upgrade klog to v2
Body: ### What you would like to be added?
I would like to upgrade `klog` version to v2
### Why is this needed?
As we discussed in https://github.com/kubeflow/katib/pull/2463#discussion_r1879643488, a dedicated issue for upgrading `klog` version is needed.
And also, we rely on both v1 and v2 version, which is not neat enough and may need to be unified into the same version.
https://github.com/kubeflow/katib/blob/336396436aa49de730887456028e3daa1465e500/go.mod#L30
https://github.com/kubeflow/katib/blob/336396436aa49de730887456028e3daa1465e500/go.mod#L153
cc👀 @kubeflow/wg-automl-leads @tariq-hasan @helenxie-bit @doris-xm
/remove-label lifecycle/needs-triage
/help-wanted
/good-first-issue
### Love this feature?
Give it a 👍 We prioritize the features with most 👍 | 0easy
|
Title: remove "force" parameter from identifierpreparer.quote, quote_schema in alembic/ddl/base.py
Body: The "force" parameter has had [no effect since 0.9](https://docs.sqlalchemy.org/en/latest/core/internals.html?highlight=quote#sqlalchemy.sql.compiler.IdentifierPreparer.quote) and I was going to remove it entirely in 1.3, though at the moment I'm going to just have it emit a deprecation warning. this will still cause alembic's tests to fail on the warning. | 0easy
|
Title: docs: update CNAME information
Body: > If you create a CNAME record, its presence will cause other RRsets of the same name to be hidden (“occluded”) from the public (i.e. in responses to DNS queries). This is per RFC 1912.
> However, as far as the API is concerned, you can still retrieve and manipulate those additional RRsets. In other words, CNAME-induced hiding of additional RRsets does not apply when looking at the zone through the API.
> It is currently possible to create a CNAME RRset with several records. However, this is not legal, and the response to queries for such RRsets is undefined. In short, don’t do it.
etc.
This is outdated.
Similarly, the explanation on how bulk request validation and error responses are done may not be completely up to date. | 0easy
|
Title: Check that we are doing proper file extension validation for all data types
Body: [Here](https://github.com/docarray/docarray/pull/1155) we make sure to do proper file extension validation for audio files paths, meaning:
- we have an exhaustive list of all audio file extensions
- if a different extension is given to the auio url, fail validation
- no _no_ extension is given, pass
We need to apply the same logic to all other modalities as well, such as image, text, video, ... | 0easy
|
Title: No problem
Body: 1. For bug reports, please describe the minimal steps to reproduce
2. General questions: 99% of the answers are in the help documentation, please read it carefully: https://kmfaka.baklib-free.com/
3. New feature/concept submissions: please describe them in text or annotate screenshots
|
Title: Improve version comparison in check_schema_version util function
Body: In the `check_schema_version` utility function, there is custom code to determine whether saved schema versions are older or newer than the current schema version. This comparison could likely be simplified significantly by using the packaging library performing the version comparison instead of the custom code.
Current code:
```
current = SCHEMA_VERSION.split(".")
saved = version_string.split(".")
warning_text_upgrade = (
    "The schema version of the saved %s"
    "(%s) is greater than the latest supported (%s). "
    "You may need to upgrade featuretools. Attempting to load %s ..."
    % (cls_type, version_string, SCHEMA_VERSION, cls_type)
)
for c_num, s_num in zip_longest(current, saved, fillvalue=0):
    if c_num > s_num:
        break
    elif c_num < s_num:
        warnings.warn(warning_text_upgrade)
        break
warning_text_outdated = (
    "The schema version of the saved %s"
    "(%s) is no longer supported by this version "
    "of featuretools. Attempting to load %s ..."
    % (cls_type, version_string, cls_type)
)
# Check if saved has older major version.
if current[0] > saved[0]:
    logger.warning(warning_text_outdated)
```
If we use packaging as described here (https://stackoverflow.com/a/11887885), we could simplify the comparison to something like `version.parse(saved) < version.parse(current)` and avoid splitting the version into components and looping over the version lists. | 0easy
|
Title: Easier tutorial for beginners!
Body: I am new to OpenCLIP; is there an easier way to quickly start training? I am really confused about the Training part of readme.md, especially since there are so many parallel parts in the Training section, such as test and development. Also, I don't know the format of the dataset in the "Sample single-process running code" section.
|
Title: Feature: add `broker.publisher(filter=...)` option
Body: We should make the publisher call optional for cases where the user wants to suppress the answer for one publisher but still publish it via another one
Example:
```python
@broker.subscriber("in")
@broker.publisher("out1")
@broker.publisher("out2")
async def handler():
    if ...:
        raise SkipMessage  # skip both publishers
    else:
        return "result"  # call both publishers
```
And with `filter` API:
```python
@broker.subscriber("in")
@broker.publisher(
    "out1",
    filter=lambda data: bool(data)  # call it if the message is not None
)
@broker.publisher("out2")  # call it always
async def handler():
    if ...:
        return "result"
```
I think, publisher filter should be one of the following signatures (related to #1431):
```python
def pub_filter(data: Response) -> bool:
    ...

async def pub_filter(data: Response) -> bool:
    ...
``` | 0easy
|
Title: Add setting to disable the broken website alert
Body: ### Describe the issue
We're getting alerts on pretty much all components in https://l10n.xwiki.org related to Broken project website, with the following error:
> (403 Client Error: Forbidden for url: https://www.xwiki.org/xwiki/bin/view/Main/WebHome)
I think the issue is that our sysadmins configured www.xwiki.org to prevent some bots from accessing it, and so it's also blocking Weblate's checks. For us this check is completely useless since we control the components we create, so I set `WEBSITE_REQUIRED=False` in settings and manually triggered the tasks `repository_alerts` and `component_alerts`, but the alert is still displayed, and I can't find a way to globally dismiss it.
### I already tried
- [X] I've read and searched [the documentation](https://docs.weblate.org/).
- [X] I've searched for similar filed issues in this repository.
### Steps to reproduce the behavior
1. Create a project and indicate the URL https://www.xwiki.org
2. See that an alert is created for broken project website
3. Check that the website is actually available
### Expected behavior
Website availability should be checked with some specific documented headers so that it's possible to filter them out to not consider them as bots.
Also it should be possible to entirely ignore those alerts with some configuration, and if possible to be able to bulk dismiss alerts.
### Screenshots

### Exception traceback
_No response_
### How do you run Weblate?
PyPI module
### Weblate versions
Weblate: 5.9.2
Django: 5.1.4
siphashc: 2.3
translate-toolkit: 3.14.5
lxml: 5.3.0
pillow: 11.1.0
nh3: 0.2.15
python-dateutil: 2.8.2
social-auth-core: 4.5.1
social-auth-app-django: 5.4.2
django-crispy-forms: 2.3
oauthlib: 3.2.2
django-compressor: 4.4
djangorestframework: 3.15.2
django-filter: 24.3
django-appconf: 1.0.6
user-agents: 2.2.0
filelock: 3.16.1
RapidFuzz: 3.11.0
openpyxl: 3.1.2
celery: 5.4.0
django-celery-beat: 2.7.0
kombu: 5.3.4
translation-finder: 2.19
weblate-language-data: 2024.16
html2text: 2024.2.26
pycairo: 1.25.1
PyGObject: 3.46.0
diff-match-patch: 20241021
requests: 2.32.3
django-redis: 5.4.0
hiredis: 2.2.3
sentry-sdk: 2.19.2
Cython: 3.0.11
mistletoe: 1.4.0
GitPython: 3.1.40
borgbackup: 1.2.7
pyparsing: 3.1.1
ahocorasick_rs: 0.22.1
python-redis-lock: 4.0.0
charset-normalizer: 3.3.2
cyrtranslit: 1.1.1
drf-spectacular: 0.28.0
Python: 3.11.2
Git: 2.39.5
psycopg: 3.1.17
psycopg-binary: 3.1.17
phply: 1.2.6
ruamel.yaml: 0.17.40
tesserocr: 2.6.2
boto3: 1.28.85
aeidon: 1.15
iniparse: 0.5
google-cloud-translate: 3.19.0
openai: 1.59.7
Mercurial: 6.9
git-review: 2.4.0
PostgreSQL server: 15.10
Database backends: django.db.backends.postgresql
PostgreSQL implementation: psycopg3 (binary)
Cache backends: default:RedisCache, avatar:FileBasedCache
Email setup: django.core.mail.backends.smtp.EmailBackend: localhost
OS encoding: filesystem=utf-8, default=utf-8
Celery: redis://localhost:6379, redis://localhost:6379, regular
Platform: Linux 6.1.0-17-amd64 (x86_64)
### Weblate deploy checks
_No response_
### Additional context
_No response_ | 0easy
|
Title: [Feature request] Add apply_to_images to AtLeastOneBBoxRandomCrop
Body: | 0easy
|
Title: Allow generate command to specify which generator to use in prisma schema
Body: ## Problem
My python generator is in the same file I use for my js client generator. Running `prisma py generate` fails when I have third party JS generators that are not installed with npm. This is an issue when I try to generate the python prisma client on a container image that doesn't need to generate the third party generators. Currently, I just comment out the third party generators as a work around.
## Suggested solution
`prisma generate` accepts a `--generator` argument that `Specifies which generator to use to generate assets.`. By exposing this to `prisma py generate`, I'd be able to specify that I only want to build the python generator and skip any other irrelevant generator.
## Alternatives
Looks like prisma already has this built in! Just need to be exposed to the python cli script.
| 0easy
|
Title: add action to post binder link on each PR
Body: we want to create a github action that posts a link to binder whenever someone opens a PR; this will allow us to easily test things
some notes:
1. since this requires a github token, I believe the trigger should be pull_request_target
2. the host should be binder.ploomber.io
3. we also need to add an environment.yml, similar to [this one](https://github.com/ploomber/jupysql/blob/master/environment.yml)
4. and a [postBuild](https://github.com/ploomber/jupysql/blob/master/postBuild) like this
more information
https://mybinder.readthedocs.io/en/latest/
https://mybinder.org/
some related code:
https://github.com/jupyterlab/maintainer-tools/blob/main/.github/actions/binder-link/action.yml | 0easy
|
Title: Deprecation warning in doc page for Accordion layout
Body: Page https://panel.holoviz.org/reference/layouts/Accordion.html
Deprecation warning in text:
BokehDeprecationWarning: 'square() method' was deprecated in... use scatter(... instead. | 0easy
|
Title: add learning rate schedules for DL
Body: | 0easy
|
Title: [DOC] Add documentation on adding to rst files
Body: # Brief Description of Fix
Currently, the docs provide only a few lines of instructions for building and changing the docs locally. It's not clear how to find what you build locally in the directory, how to make changes in the rst files, or how to rebuild and check your changes.
I would like to propose a change, such that now the docs have more detail and hyperlinks on how the docs are built, what markup language to use, and how to make changes. Hopefully this will make it easier for contributors to add to the docs.
# Relevant Context
- [Link to documentation page](http://pyjanitor.readthedocs.io)
- [Link to exact file to be edited](https://github.com/ericmjl/pyjanitor/blob/dev/CONTRIBUTING.rst)
I would like to work on this!
| 0easy
|
Title: iPython notebook keyboard interrupt breaks cell when using Halo
Body: <!-- Please use the appropriate issue title format:
BUG REPORT
Bug: {Short description of bug}
SUGGESTION
Suggestion: {Short description of suggestion}
OTHER
{Question|Discussion|Whatever}: {Short description} -->
## Description
Currently, in an iPython notebook, if a user does not handle keyboard interrupt and the spinner is running, a `KeyboardInterrupt` is thrown but the spinner keeps spinning.
### System settings
- Operating System: MacOS
- Terminal in use: iTerm 2
- Python version: 2.7.10
- Halo version: 0.0.8
### Error
<!-- Put error here. Exceptions, and full traceback if possible. -->
### Expected behaviour
```
In [5]: import time
...: import random
...:
...: spinner.start()
...: for i in range(100):
...: spinner.text = '{0}% Downloaded dataset.zip'.format(i)
...: time.sleep(random.random())
...: spinner.succeed('Downloaded dataset.zip')
...:
⠹ 4% Downloaded dataset.zip^C---------------------------------------------------------------------------
KeyboardInterrupt Traceback (most recent call last)
<ipython-input-5-81a625f23371> in <module>()
5 for i in range(100):
6 spinner.text = '{0}% Downloaded dataset.zip'.format(i)
----> 7 time.sleep(random.random())
8 spinner.succeed('Downloaded dataset.zip')
9
KeyboardInterrupt:
⠼ 4% Downloaded
```
## Steps to recreate
Following snippet should recreate the error.
```
In [1]: from halo import Halo
In [2]: spinner = Halo(text='Downloading dataset.zip', spinner='dots')
In [3]: import time
...: import random
...:
...: spinner.start()
...: for i in range(100):
...: spinner.text = '{0}% Downloaded dataset.zip'.format(i)
...: time.sleep(random.random())
...: spinner.succeed('Downloaded dataset.zip')
```
## People to notify
@ManrajGrover
| 0easy
|
Title: [ENH] expose fitted parameters of `AutoETS` via `get_fitted_params`
Body: The fitted parameters of `AutoETS` should be visible via `get_fitted_params`.
For this, at the end of `_fit`, we need to write the attributes we want to be visible to `self.paramname_`, ending in an underscore.
@ericjb has already found some parameters that we could write, though I am not certain about whether there are more, see here: https://github.com/sktime/sktime/discussions/7184
FYI @ericjb | 0easy
|
Title: resample trajectory and hyp.tools.resample
Body: While animated line plots are resampled, static plots are not. We could add a `resample` flag to the `plot` function that allows a user to upsample (or downsample) their timeseries data. Then, we could expose the function as `hyp.tools.resample`.
questions:
+ this only makes sense for timeseries data (I think), so do we allow the user to resample non-timeseries data?
+ how should the user interact with the flag? i.e. does 10 mean upsample by a factor of 10? what about downsampling?
| 0easy
|
Title: Add a Configuration Switch for Crypto Pairs
Body: I use CCXT along with Cryptofeed. CCXT uses '/' between crypto pairs while Cryptofeed uses '-'. Could you please make this something adjustable so that I can line up the pairs in my DB and avoid all the millions of updates/calls to replace the '-' with '/'?
| 0easy
|
Title: Option to run PretrainedTransformer in eval mode
Body: While setting `train_parameters` to `False` very often we also may consider disabling dropout/batchnorm, in other words, to run the pretrained model in eval mode.
We've done a little modification to `PretrainedTransformerEmbedder` that allows providing whether the token embedder should be forced to `eval` mode during the training phase.
Do you think this feature might be handy? Should I open a PR?
| 0easy
|
Title: Better website
Body: The website (https://elesiuta.github.io/picosnitch/) is literally just a copy of the readme with one of the default themes for GitHub Pages. Both the layout and content could use a change.
It should probably focus on just highlighting the features with some screenshots/gifs, and direct users to the repo (https://github.com/elesiuta/picosnitch) for more details and installation/instructions.
The GitHub Pages config is stored in `docs/_config.yml` and the page is in `docs/index.md`. | 0easy
|
Title: Document where `log-<index>.js` files created by `--splitlog` are saved
Body: Currently, when using the `--splitlog` option, additional JS files are generated. When the output.xml is large, a bunch of *.js files are generated, which floods the current directory.
**Needs:**
Ability to group them into a sub-folder to tidy up the directory.
E.g
`rebot --splitlog ./resource ./output.xml` | 0easy
|
Title: Baseline support
Body: Once https://github.com/python-security/pyt/pull/100 is merged this will be do-able.
So a baseline is for when you want to diff a previous run (probably of known issues or false positives) against the current run. As a big part of continuous integration, baseline support is super important.
See https://github.com/openstack/bandit as a tool that implements this.
```python
parser.add_argument('-b', '--baseline',
                    help='path of a baseline report to compare against '
                         '(only JSON-formatted files are accepted)',
                    type=str,
                    default=None)
```
There is also the newly open sourced [`detect-secrets` repo](https://github.com/Yelp/detect-secrets/blob/b16acf1e8dc1e05366a9bfbd7ce35ed611adb94d/detect_secrets/core/baseline.py#L41) from the Yelp security team that implements this. | 0easy
|
Title: error when finding pipeline.yaml in root directory
Body: the recursive function looks at .parent[0] but the root directory does not have a parent | 0easy
|
Title: Word2Vec docs indicate sents can be a generator, but it can't
Body: <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
Tried to pass a generator as the `sentences` argument when training Word2Vec, as suggested [in the docs](https://radimrehurek.com/gensim/models/word2vec.html#usage-examples:~:text=sentences%20can%20be%20a%20generator). But then I get this error:
TypeError: You can't pass a generator as the sentences argument. Try a sequence.
#### Steps/code/corpus to reproduce
```
from gensim.models import Word2Vec

def get_sents():
    for ii in range(100):
        yield ['hi']

sents = get_sents()
model = Word2Vec(sents, size=100, window=5, min_count=1, workers=4)
```
#### Versions
Please provide the output of:
```python
import platform; print(platform.platform())
import sys; print("Python", sys.version)
import struct; print("Bits", 8 * struct.calcsize("P"))
import numpy; print("NumPy", numpy.__version__)
import scipy; print("SciPy", scipy.__version__)
import gensim; print("gensim", gensim.__version__)
from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
```
```
Linux-5.8.5-arch1-1-x86_64-with-glibc2.2.5
Python 3.8.5 (default, Jul 27 2020, 08:42:51)
[GCC 10.1.0]
Bits 64
NumPy 1.19.1
SciPy 1.4.1
gensim 3.8.3
FAST_VERSION 1
```
| 0easy
|
Title: Precision Recall Curve
Body: **Describe the solution you'd like**
For imbalanced classification problems it would be wonderful to have a precision recall plot. This would make a nice companion to ROC/AUC plots.
http://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html#sphx-glr-auto-examples-model-selection-plot-precision-recall-py | 0easy
|
Title: Add PartOfDay primitive
Body: - Add a primitive that determines what part of the day a certain datetime falls in:
```python
def day_part(hour):
    if hour in [4,5]:
        return "dawn"
    elif hour in [6,7]:
        return "early morning"
    elif hour in [8,9,10]:
        return "late morning"
    elif hour in [11,12,13]:
        return "noon"
    elif hour in [14,15,16]:
        return "afternoon"
    elif hour in [17, 18,19]:
        return "evening"
    elif hour in [20, 21, 22]:
        return "night"
    elif hour in [23,24,1,2,3]:
        return "midnight"
``` | 0easy
|
Title: Additional Evaluation Metric / Imbalanced Problems
Body: There are several evaluation metrics that would be particularly beneficial for (binary) imbalanced classification problems and would be greatly appreciated additions. In terms of prioritizing implementation (and likely ease of implementation I will rank-order these):
1. **AUCPR** - helpful in the event that class labels are needed and the positive class is of greater importance.
2. **F2 Score** - helpful in the event that class labels are needed but false negatives are more costly.
3. **F0.5 Score** - helpful in the event class labels are needed but false positives are more costly. | 0easy
|
Title: Restyle the documentation landing page
Body: This is a good issue for anyone with markdown experience who enjoys the styling/design aspect rather than writing copy. There's no knowledge of Vizro required.
The current landing page for documentation is rather bland https://vizro.readthedocs.io/en/stable/ and the presentation could potentially be improved.
It would be great to have some styling to improve it. We can start with the copy we have (although we do need to add a section to link to the demo example in the repo). Some basic redesign is all that is needed to boost the look and feel. | 0easy
|
Title: Toast messages
Body: Or at least some UI independent definition of "show something to the user after something has happened, even if they've navigated away from that page". | 0easy
|
Title: Move BrownCorpus from word2vec to gensim.corpora
Body: Not a high-priority at all, but it'd be more sensible for such a tutorial/testing utility corpus to be implemented elsewhere - maybe under `/test/` or some other data- or doc- related module – rather than in `gensim.models.word2vec`.
_Originally posted by @gojomo in https://github.com/RaRe-Technologies/gensim/pull/2939#discussion_r493820305_ | 0easy
|
Title: [ENH] in `TimesFMForecaster`, `context_len` and `horizon_len` should be set automatically per default
Body: `TimesFMForecaster` should set `context_len` and `horizon` len to a reasonable value, automatically per default.
The `horizon_len` can be obtained from `fh`, but it makes sense to give the user the option to override.
The `context_len` can also be set to an automatic choice, in the default. Again, giving the user the option to override makes sense.
FYI @geetu040.
As a recipe to change this, there needs to be an internal `self._context_len` and `self._horizon_len` which is determined by the settings. | 0easy
|
Title: Classifier Evaluation in Examples Notebook
Body: Classifier Evaluation imports in `examples.ipynb`: `from sklearn.cross_validation import train_test_split` has been deprecated and issues a warning.
### Proposal/Issue
Update `import sklearn.cross_validation` to `import sklearn.model_selection`
Will remove warning:
```
/usr/local/lib/python3.6/site-packages/sklearn/cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
"This module will be removed in 0.20.", DeprecationWarning)
```
### Code Snippet
```python
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.cross_validation import train_test_split
``` | 0easy
|
Title: Sort experiments by descending creation date by default
Body: ### What you would like to be added?
Showing experiments from newest to oldest seems more likely to show relevant experiments compared to sorting them by name. It should be the default.
### Why is this needed?
Usability
### Love this feature?
Give it a 👍 We prioritize the features with most 👍 | 0easy
|
Title: Update WMD documentation
Body: A user reported a documentation issue on the mailing list: https://groups.google.com/g/gensim/c/8nobtm9tu-g.
The report shows two problems:
1. Something changed with `wmdistance` between 3.8 and 4.0 that is not properly captured in the [Migration notes](https://github.com/RaRe-Technologies/gensim/wiki/Migrating-from-Gensim-3.x-to-4).
2. The [WMD tutorial](https://radimrehurek.com/gensim_4.0.0/auto_examples/tutorials/run_wmd.html#normalizing-word2vec-vectors) contains instructions that are now outdated in 4.0:
> When using the wmdistance method, it is beneficial to normalize the word2vec vectors first, so they all have equal length. To do this, simply call model.init_sims(replace=True) and Gensim will take care of that for you.
…And then our own tutorial logs a `WARNING : destructive init_sims(replace=True) deprecated & no longer required for space-efficiency`, which looks silly. | 0easy
|
Title: BUG: Series constructor from dictionary drops key (index) levels when not all keys have same number of entries
Body: ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
pd.Series({("l1",):"v1", ("l1","l2"): "v2"})
# Out[30]:
# l1 v1
# l1 v2
# dtype: object
# the reason is that the Series constructor uses internally MultiIndex.from_tuples in the following way (note that the input is a tuple of tuples!):
pd.MultiIndex.from_tuples((("l1",), ("l1","l2")))
# Out[32]:
# MultiIndex([('l1',),
# ('l1',)],
# )
# compare to the following which produces the expected result:
pd.MultiIndex.from_tuples([("l1",), ("l1","l2")])
# Out[33]:
# MultiIndex([('l1', nan),
# ('l1', 'l2')],
# )
# Note: this was tested with latest release and current master
```
### Issue Description
When calling the `Series` constructor with a dict where the keys are tuples, a series with a `MultiIndex` gets created. However, if the number of entries in the keys is not the same, key entries from keys with more than the minimum number get dropped. This is in several ways problematic, especially if it produces duplicated index values / keys, which is not expected because it was called with a dict (which by definition has unique keys).
### Expected Behavior
The `MultiIndex` of the new series has nan-padded values.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.10.16
python-bits : 64
OS : Linux
OS-release : 6.8.0-51-generic
Version : #52~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Mon Dec 9 15:00:52 UTC 2
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : de_DE.UTF-8
LOCALE : de_DE.UTF-8
pandas : 2.2.3
numpy : 2.2.1
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 24.2
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None
</details>
| 0easy
|
Title: st2ctl in Docker environment
Body: `st2ctl` assumes it's running in a VM environment and wasn't designed to be executed inside Docker. We'll need to improve that.
Because listing processes or starting services is not viable in a Docker environment, where each service runs in its own Docker container, `st2ctl` needs a simple modification so it doesn't try to query the list of processes but instead suggests using the appropriate Docker commands.
Example commands which don't work in Docker environment:
* `st2ctl status`
* `st2ctl stop`
* `st2ctl start`
* `st2ctl restart`
* `st2ctl restart-component`
* `st2ctl reopen-log-files`
* `st2ctl reload` lists running processes as well
| 0easy
|
Title: Add sklearn.ensemble.HistGradientBoostingRegressor
Body: This should be easy since we already have `HistGradientBoostingClassifier` there. | 0easy
|
Title: Remove stated support for Django<1.8
Body: We don't have a clear support policy, but the Django team no longer supports Django<1.8; let's remove it as well.
This means:
- Removing it from the tox.ini matrix
- Removing it from the README.rst and other docs, if needed
- Finding any code that looks for ``django.version`` and remove code paths used for Django<1.8 | 0easy
|
Title: Add FAQ section on parametrizing notebooks
Body: Notebook tasks require a `parameters` cell. If they don't have one, we raise the following error:
```pytb
File "/Users/Edu/miniconda3/envs/ml-testing/lib/python3.9/site-packages/ploomber/sources/notebooksource.py", line 244, in _post_init_validation
raise SourceInitializationError(msg)
ploomber.exceptions.SourceInitializationError: Notebook "/Users/Edu/dev/ml-testing/analysis.ipynb" does not have a cell tagged "parameters". Add a cell at the top and tag it as "parameters". Click here for instructions: https://papermill.readthedocs.io/en/stable/usage-parameterize.html
```
As you can see, we link to: https://papermill.readthedocs.io/en/stable/usage-parameterize.html
However, papermill's docs only cover .ipynb files, not .py files, so we need to write our own help page
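For context, a tagged parameters cell in a percent-format `.py` script looks roughly like this (sketch; `upstream` and `product` are the usual ploomber parameters, shown here as an assumption):
```python
# %% tags=["parameters"]
upstream = None
product = None
```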
Add file here: https://github.com/ploomber/ploomber/tree/master/doc/user-guide/faq | 0easy
|
Title: replace concat in encoders by y_train.groupby(X_train["variable"]).mean().sort_values()
Body: Applies to the following encoders:
- OrdinalEncoder
- MeanEncoder
- PRatioEncoder
- WoEEncoder
Currently we concatenate the target to the predictors (X_train) to determine the mappings. This is not necessary. We can determine them automatically with syntax like this:
y_train.groupby(X_train["A7"]).mean().sort_values()
therefore removing one unnecessary step in the calculation, the pd.concat step | 0easy
|
Title: [Feature request] Add apply_to_images to MaxSizeTransform
Body: | 0easy
|
Title: Add `figsize`, `title_fontsize`, and `text_fontsize` parameter for existing plots.
Body: As discussed in #11 and pointed out by @frankherfert, plots generated by scikit-plot are quite small in Jupyter notebooks. Adding `figsize`, `title_fontsize`, and `text_fontsize` parameters will let the user adjust the size of the plot based on their preferences.
`figsize` should accept a 2-tuple and be kept at a default value of None and passed to the `plt.subplots` function during figure creation. `title_fontsize` should be kept to default value "large" and `text_fontsize` to default value "medium" | 0easy
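A rough sketch of how the parameters would be threaded through a plotting function (illustrative only, not the actual scikit-plot implementation):
```python
import matplotlib.pyplot as plt
import numpy as np


def plot_confusion_matrix(cm, figsize=None, title_fontsize="large", text_fontsize="medium"):
    # figsize is forwarded to plt.subplots; None keeps matplotlib's default
    fig, ax = plt.subplots(figsize=figsize)
    ax.set_title("Confusion Matrix", fontsize=title_fontsize)
    ax.imshow(cm, interpolation="nearest", cmap="Blues")
    for (i, j), value in np.ndenumerate(cm):
        ax.text(j, i, str(value), ha="center", va="center", fontsize=text_fontsize)
    return ax


ax = plot_confusion_matrix(np.array([[5, 1], [2, 7]]), figsize=(8, 6))
```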
|
Title: Add support for `classification_report`
Body: <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Is your feature request related to a problem? Please describe.**
Implements `classification_report` for classification metrics.(https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html)
| 0easy
|
Title: Allow for a way to let all users use the bot
Body: Currently, only users with the roles defined in `ALLOWED_ROLES` in `.env` are able to use the bot. Add a way to default it such that all users can use the bot (e.g., if no `ALLOWED_ROLES` is set, then everyone can use it). Open to any implementation/suggestions on this | 0easy
|
Title: document early stop
Body: raising this:
https://github.com/ploomber/ploomber/blob/2fe474987ae6fca088e02399dbff99672a17fc95/src/ploomber/exceptions.py#L83
will exit the DAG gracefully, but it's undocumented | 0easy
|
Title: Add Jupyter notebook examples for plot_cumulative_gain and plot_lift_curve
Body: Add Jupyter notebook examples for `metrics.plot_cumulative_gain` and `metrics.plot_lift_curve` | 0easy
|
Title: DEPENDENCIES: the dependency list is incomplete
Body: **What could be better?**
The dependencies' requ…….txt should be updated, and for PyTorch in particular the exact version and installation method should be documented.
It took me a whole afternoon to set up the environment, and I barely managed to get a working `Tag0.0.1 pytorch 1.9.0 OnlyCpu` setup.
@babysor @oceanarium | 0easy
|
Title: WHILE limit doesn't work in teardown
Body: ### Description
My original source is a test run failure. Tests ran inside a docker container and the run stopped when the container exited with code 137. While trying to simplify the test, I arrived at the steps given below. I'm seeing execution get stuck in Test Teardown.
### Steps to reproduce
1. Open terminal and create a test suite file `stuck.robot` with content
```
*** Keywords ***
My Teardown
    WHILE    True    limit=3
        Log To Console    Sleep 1 second
        Sleep    1
    END

*** Test Cases ***
My Test
    No Operation
    [Teardown]    My Teardown
```
2. Run test: `robot stuck.robot`
### Actual results
Robot output looks normal until while loop limit is reached:
```
==============================================================================
Stuck
==============================================================================
My Test .Sleep 1 second
Sleep 1 second
Sleep 1 second
```
Then robot appears stuck. Host resource monitor indicates high CPU usage. It is possible to force exit by hitting `CTRL+C` more than once.
### Reproducibility
Reproduced 10 times out of 10 on my local PC (first one in the list below). On other systems, reproduced on first try.
### Tested versions
* Windows 11 PC running WSL: Ubuntu 20.04.6 LTS (GNU/Linux 5.15.90.1-microsoft-standard-WSL2 x86_64)
* Robot Framework 6.0.2 , Python 3.8.10
* Robot Framework 6.1a1 , Python 3.8.10
* Ubuntu 22.04 LTS system
* _need to collect version info later_
* MacBook Pro
* Robot Framework 6.0.2, Python 3.10.5
| 0easy
|
Title: update misc workflows datalab tutorial non-IID section to show overall score first and then per-example issues
Body: Re-order this section:
https://docs.cleanlab.ai/master/tutorials/datalab/workflows.html#Detect-if-your-dataset-is-non-IID
Also, to get the overall score, we should use `get_issue_summary()` as opposed to going into the `.info()` attribute. We want to avoid going into that attribute unless strictly necessary.
Finally, add a sentence re-emphasizing that the non-IID issue is primarily a property of the overall dataset as opposed to individual data points, and the `get_issues()` scores + plot are just there to help you understand more when your dataset has a low overall non-IID p-value. | 0easy
|
Title: google analytics support
Body: | 0easy
|
Title: Update the scale bar example for the latest API
Body: ## 🧰 Task
The scale bar example
https://napari.org/dev/gallery/scale_bar.html
Doesn't cover the latest scale bar API, which permits a lot more.
It should be updated to include some of the new options:
https://github.com/napari/napari/blob/0e01840c9237489acf238aec1d5cdaf9201ed640/napari/components/overlays/scale_bar.py#L10-L46
Would be good to also update the docs, see:
https://github.com/napari/docs/issues/38
Motivation:
User post on image.sc
https://forum.image.sc/t/increasing-font-size-and-repositioning-the-scale-bar/97972/3
| 0easy
|
Title: Fixing warning errors in logs
Body: The bot works great! Thank you for creating such an amazing software.
<img width="1282" alt="CleanShot 2022-12-19 at 10 30 58@2x" src="https://user-images.githubusercontent.com/9989650/208405561-e15bc570-a75f-4c5e-9f7b-49f9bbdd9a36.png">
When monitoring the logs, I've seen that there are some cases that need error handling. I just want to share them in case you'd like to fix them.
|
Title: Support Settings Configuration through CLI and environment Variables
Body: Right now settings are just passed in through the settings file. This is somewhat limiting if there is a value that will be dynamic with deployments (such as a docker image file, etc.) We should support accepting settings through environment variables, the CLI, the current settings file, and any mix of those.
The order of precedence with mixed should be CLI -> environment -> config file.
If possible, the implementation of this ticket would not be to enumerate each settings value for each of these sources, but to have some way of dynamically finding zappa settings and configuring it. | 0easy
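A minimal sketch of the precedence resolution (the `ZAPPA_`-prefixed environment variable convention and the helper itself are assumptions, not existing Zappa behavior):
```python
import os


def resolve_setting(name, cli_value, file_settings, default=None):
    """Precedence: CLI -> environment -> settings file -> default."""
    if cli_value is not None:
        return cli_value
    env_value = os.environ.get(f"ZAPPA_{name.upper()}")
    if env_value is not None:
        return env_value
    return file_settings.get(name, default)


print(resolve_setting("s3_bucket", None, {"s3_bucket": "my-bucket"}))  # my-bucket
```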
|
Title: Contribute `Lollipop chart` to Vizro visual vocabulary
Body: ## Thank you for contributing to our visual-vocabulary! 🎨
Our visual-vocabulary is a dashboard that serves as a comprehensive guide for selecting and creating various types of charts. It helps you decide when to use each chart type, and offers sample Python code using [Plotly](https://plotly.com/python/), and instructions for embedding these charts into a [Vizro](https://github.com/mckinsey/vizro) dashboard.
Take a look at the dashboard here: https://huggingface.co/spaces/vizro/demo-visual-vocabulary
The source code for the dashboard is here: https://github.com/mckinsey/vizro/tree/main/vizro-core/examples/visual-vocabulary
## Instructions
0. Get familiar with the dev set-up (this should be done already as part of the initial intro sessions)
1. Read through the [README](https://github.com/mckinsey/vizro/tree/main/vizro-core/examples/visual-vocabulary) of the visual vocabulary
2. Follow the steps to contribute a chart. Take a look at other examples. This [commit](https://github.com/mckinsey/vizro/pull/634/commits/417efffded2285e6cfcafac5d780834e0bdcc625) might be helpful as a reference to see which changes are required to add a chart.
3. Ensure the app is running without any issues via `hatch run example visual-vocabulary`
4. List out the resources you've used in the [README](https://github.com/mckinsey/vizro/tree/main/vizro-core/examples/visual-vocabulary)
5. Raise a PR
**Useful resources:**
- Lollipop: https://medium.com/@caiotaniguchi/plotting-lollipop-charts-with-plotly-8925d10a3795
- Data chart mastery: https://www.atlassian.com/data/charts/how-to-choose-data-visualization | 0easy
|
Title: [Feature request] Add apply_to_images to OverlayElements
Body: | 0easy
|
Title: [Detections] extend `from_transformers` with segmentation models support
Body: ### Description
Currently, Supervision only supports Transformers object detection models. Let's expand [`from_transformers`](https://github.com/roboflow/supervision/blob/781a064d8aa46e3875378ab6aba1dfdad8bc636c/supervision/detection/core.py#L391) by adding support for segmentation models.
### API
The code below should enable the annotation of an image with segmentation results.
```python
import torch
import supervision as sv
from PIL import Image
from transformers import DetrImageProcessor, DetrForSegmentation

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50")

image = Image.open(<PATH TO IMAGE>)
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

width, height = image.size
target_size = torch.tensor([[height, width]])
results = processor.post_process_segmentation(
    outputs=outputs, target_sizes=target_size)[0]
detections = sv.Detections.from_transformers(results)

mask_annotator = sv.MaskAnnotator()
annotated_image = mask_annotator.annotate(scene=image, detections=detections)
```
### Additional
- [Transformers DETR Docs](https://huggingface.co/docs/transformers/en/model_doc/detr)
- Note: Please share a Google Colab with minimal code to test the new feature. We know it's additional work, but it will speed up the review process. The reviewer must test each change. Setting up a local environment to do this is time-consuming. Please ensure that Google Colab can be accessed without any issues (make it public). Thank you! 🙏🏻 | 0easy
|
Title: K8s Deployment Tutorial
Body: ### Configuration file
```json
{
"accounts_urls": [
{
"mark": "账号标识,可以设置为空字符串",
"url": "账号主页链接",
"tab": "账号主页类型",
"earliest": "作品最早发布日期",
"latest": "作品最晚发布日期"
}
],
"mix_urls": [
{
"mark": "合集标识,可以设置为空字符串",
"url": "合集链接或者作品链接"
}
],
"owner_url": {
"mark": "账号标识,可以设置为空字符串",
"url": "账号主页链接"
},
"root": "",
"folder_name": "Download",
"name_format": "create_time type nickname desc",
"date_format": "%Y-%m-%d %H:%M:%S",
"split": "-",
"folder_mode": false,
"music": false,
"storage_format": "",
"cookie": "",
"dynamic_cover": false,
"original_cover": false,
"proxies": "",
"download": true,
"max_size": 0,
"chunk": 1048576,
"max_retry": 5,
"max_pages": 0,
"default_mode": "8",
"ffmpeg": ""
}
```
`default_mode: "8"`
### Dockerfile
```Dockerfile
FROM python:3.12
WORKDIR app
COPY ./src /app/src
COPY ./static /app/static
COPY ./templates /app/templates
COPY ./Dockerfile /app/Dockerfile
COPY ./main.py /app/main.py
COPY ./requirements.txt /app/requirements.txt
# COPY ./settings.json /app/settings.json  (mounted in from outside the container instead)
RUN sed -i 's@^.* if self.console.input(.*$@@' main.py &&\
sed -i 's@^.* "是否已仔细阅读上述免责声明(YES/NO): ").upper() != "YES":.*$@@' main.py &&\
sed -i 's@^.* return False.*$@@' main.py &&\
echo '#!/bin/bash' >> /entrypoint.sh &&\
echo 'cd /app' >> /entrypoint.sh &&\
echo 'echo 5 | /usr/local/bin/python main.py' >> /entrypoint.sh &&\
chmod +x /entrypoint.sh &&\
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple -r requirements.txt
ENTRYPOINT ["/bin/bash", "/entrypoint.sh"]
```
### K8s configuration
```yaml
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: tiktok
namespace: tool
labels:
app: tiktok
spec:
replicas: 1
selector:
matchLabels:
app: tiktok
template:
metadata:
labels:
app: tiktok
spec:
volumes:
- name: tiktok
persistentVolumeClaim:
claimName: tiktok
containers:
- name: tiktok
image: 'tiktok:v5.3'
imagePullPolicy: IfNotPresent
ports:
- name: tiktok
protocol: TCP
containerPort: 5000
volumeMounts:
- name: tiktok
mountPath: /app/Download
resources:
limits:
cpu: 200m
memory: 512Mi
requests:
cpu: 50m
memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
name: tiktok
namespace: tool
annotations:
traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
traefik.ingress.kubernetes.io/service.sticky.cookie.name: "sticky"
traefik.ingress.kubernetes.io/service.sticky.cookie.secure: "true"
spec:
ports:
- name: http
protocol: TCP
port: 5000
selector:
app: tiktok
```
```yaml
# PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
name: tiktok
namespace: tool
spec:
storageClassName: "nfs-storage"
accessModes:
- ReadWriteMany
capacity:
storage: 10Gi
persistentVolumeReclaimPolicy: Retain
nfs:
path: /data/tiktok
server: 10.233.1.10
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: tiktok
namespace: tool
spec:
storageClassName: "nfs-storage"
accessModes:
- ReadWriteMany
volumeName: "tiktok"
resources:
requests:
storage: 10Gi
```
```yaml
# IngressRoute
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: tiktok
namespace: tool
spec:
entryPoints:
- websecure
routes:
- match: Host(`tiktok.domain.com`) && PathPrefix(`/`)
kind: Rule
services:
- name: tiktok
port: 5000
tls:
certResolver: letsencrypt
```
### Screenshots

| 0easy
|
Title: Chat list scroll bar too short for small number of chats.
Body: Below is UI when theres only a few chats (pop up menu is difficult to use)
<img width="270" alt="image" src="https://github.com/LAION-AI/Open-Assistant/assets/61619422/5791e699-5b21-4a7b-a820-98e619279c85">
Below is the UI when there are a lot of chats (the pop-up menu is easy to use):
<img width="272" alt="image" src="https://github.com/LAION-AI/Open-Assistant/assets/61619422/28606ee9-52cb-4f64-ba70-479b10d4551c">
I believe the best way to solve this is to give the chat list a default height, preferably a flexible one. | 0easy
|
Title: Additional GeoJSON files
Body: Gmaps bundles useful GeoJSON geometries to save users the work of having to find them from the Internet. There are currently a [handful](https://github.com/pbugnion/gmaps/blob/master/gmaps/geojson_geometries/geojson_geometries.py) of geometries included. Adding new geometries is a very useful way to contribute to `gmaps`.
If you use gmaps with your own GeoJSON file, chances are that other people would also find it useful. Contributing the file back to gmaps is a meaningful way to contribute to the project.
**What makes a good GeoJSON file**
- GeoJSON files should be between 1MB and 3MB. If your file is larger than that, you can use [mapshaper](http://www.mapshaper.org) to simplify it.
- The license needs to match that of GMaps: the file should either be in the public domain, or it should be MIT or Apache licensed.
**How to contribute GeoJSON files**
If you have a way to share your file on the Internet (either by putting it on Amazon S3, or through a Dropbox link, or by putting it on a web server), then do this and add the link to the [geojson_geometries](https://github.com/pbugnion/gmaps/blob/master/gmaps/geojson_geometries/geojson_geometries.py) module. We will probably copy the file onto the Amazon S3 bucket used by gmaps and change the link that you submitted.
If you do not have a way to share the file, open an issue in GitHub and upload your file through that. | 0easy
|
Title: [SDK] Support More MCKind in `tune`
Body: ### What you would like to be added?
Support specifying more types of metrics collector in the `tune` function.
https://github.com/kubeflow/katib/blob/51b246fa1c23db130c373fa9749897a0d0b334ab/sdk/python/v1beta1/kubeflow/katib/api/katib_client.py#L189
### Why is this needed?
Currently, we only support the `kind` param in `metrics_collector_config`, which means only the `StdOut` and `Push` collectors can be specified in the `tune` function.
https://github.com/kubeflow/katib/blob/51b246fa1c23db130c373fa9749897a0d0b334ab/sdk/python/v1beta1/kubeflow/katib/api/katib_client.py#L387-L391
However, there are many other collector types, such as the `File`, `Prometheus`, `Custom`, and `TensorFlowEvent` collectors. They are important too. The `tune` function would be more flexible and powerful if users could specify all these kinds of collectors.
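For illustration, the end state could look roughly like the sketch below; the `File` collector keys (such as `metrics_file`) are assumptions about a possible API, not something the SDK accepts today:
```python
import kubeflow.katib as katib


def train_fn(parameters):
    # Toy objective: report a metric derived from the hyperparameter.
    # With a File collector the training code would write this to the metrics file instead.
    lr = float(parameters["lr"])
    print(f"accuracy={1 - abs(0.05 - lr):.4f}")


client = katib.KatibClient(namespace="kubeflow")

client.tune(
    name="tune-with-file-collector",
    objective=train_fn,
    parameters={"lr": katib.search.double(min=0.01, max=0.1)},
    objective_metric_name="accuracy",
    max_trial_count=10,
    # Hypothetical: today only {"kind": "StdOut"} and {"kind": "Push"} are accepted;
    # a File collector would also need to know where the metrics file lives.
    metrics_collector_config={
        "kind": "File",
        "metrics_file": "/var/log/katib/metrics.log",
    },
)
```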
Currently we support:
- [x] Push Collector: https://github.com/kubeflow/katib/pull/2369
- [x] StdOut Collector: https://github.com/kubeflow/katib/pull/2369
- [ ] File Collector
- [ ] Prometheus Collector
- [ ] Custom Collector
- [ ] TensorFlowEvent Collector
### Love this feature?
Give it a 👍 We prioritize the features with most 👍 | 0easy
|
Title: Precise URL of watched notebook app
Body: Please display (in the terminal after running a command) to the user a precise URL address of watched notebook. | 0easy
|
Title: Add a function to find cells with the requested tags
Body: This is a simple, isolated issue.
We have a function to iterate over cells to find one with the given tag:
https://github.com/ploomber/ploomber/blob/3b7997d88a34f75855c0b5ec4391c82dce4213b6/src/ploomber/sources/nb_utils.py#L1
However, we use this function to clean up all cells with the given tags:
https://github.com/ploomber/ploomber/blob/3b7997d88a34f75855c0b5ec4391c82dce4213b6/src/ploomber/sources/notebooksource.py#L556
We're currently iterating over the notebook once per tag, but it'd be better to search for all tags in a single pass.
For example, if my input is:
```python
tags = ['a', 'b', 'c']
```
After running the function, the output should return a dictionary with the tags and its corresponding index:
```python
# 'a' is in the first cell, etc...
result = {'a': 0, 'b': 10, 'c': 20}
```
Note that cells may have more than one tag, but as long as it contains the one we are looking for, we don't care about the rest.
If a tag is not found in the notebook, it should not appear in the output. Following our previous example:
```python
# input
tags = ['a', 'b', 'c']
```
Output:
```python
# if we only get 'a', it means 'b' and 'c' are not in the notebook
result = {'a': 0}
```
[I recently wrote something like this](https://github.com/edublancas/jupyblog/blob/cf2063007915b1c2d9519cd365e494075c584059/src/jupyblog/md.py#L44), except it looks for lines in a text file, as opposed to cells in a notebook, but the general logic remains the same.
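A minimal sketch of what the single-pass lookup could look like (the function name and exact signature are placeholders, not a decided API):
```python
def find_cells_with_tags(nb, tags):
    """Map each requested tag to the index of the first cell tagged with it.

    Tags that do not appear in the notebook are simply absent from the result.
    """
    tags_to_find = set(tags)
    result = {}

    for index, cell in enumerate(nb.cells):
        cell_tags = set(cell.get("metadata", {}).get("tags", []))
        found = tags_to_find & cell_tags

        for tag in found:
            result[tag] = index

        tags_to_find -= found

        # stop early once every requested tag has been located
        if not tags_to_find:
            break

    return result
```
The early exit keeps the common case (all tags found near the top of the notebook) cheap.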
Tasks:
- [ ] Write a function that supports multiple tags
- [ ] Have `_cleanup_rendered_nb` use that new function | 0easy
|
Title: Rename `after_import_instance()` method for clarity
Body: The `after_import_instance()` is a misnomer because it is called after the instances is loaded (or created) and not after it is imported. Existing clients rely on this method so it would have to be deprecated correctly rather than simply renaming.
It would also be a good idea to pass the row in as a kwarg so that that users can use row data if they need to.
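A rough sketch of how the deprecation could be staged (the new name `after_init_instance` and the exact signature are placeholders, not a decided API):
```python
import warnings


class Resource:
    # ...existing Resource code (illustrative only)...

    def after_init_instance(self, instance, new, row=None, **kwargs):
        """New hook, called after the instance is loaded or created.

        The row is passed as a kwarg so subclasses can use the raw row data.
        The default implementation delegates to the deprecated hook so that
        existing overrides keep working during a deprecation period.
        """
        if type(self).after_import_instance is not Resource.after_import_instance:
            # the subclass still overrides the old hook: warn, then call it
            warnings.warn(
                "after_import_instance() is deprecated; override "
                "after_init_instance() instead.",
                DeprecationWarning,
            )
            self.after_import_instance(instance, new, **kwargs)

    def after_import_instance(self, instance, new, **kwargs):
        """Deprecated hook kept only for backwards compatibility."""
```
Checking whether the old hook is overridden keeps the warning from firing for users who have already migrated.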
| 0easy
|
Title: Document third-party examples, blog posts, etc.
Body: Should add a section to the docs for third-party examples and guides using Mangum | 0easy
|
Title: ✨ Start Integrating Pydantic V2
Body: We can depend on this PR from @tiangolo to build something with Pydantic V2 in Authx - https://github.com/tiangolo/fastapi/pull/9500 | 0easy
|
Title: Specify hyperparameter ranges for blocks
Body: ### Feature Description
We want to enable the users to specify the value ranges for any argument in the blocks.
The following code example shows a typical use case.
The users can specify the number of units in a DenseBlock to be either 10 or 20.
### Code Example
<!---
Please provide a code example for using that feature
given the proposed feature is implemented.
-->
```python
import autokeras as ak
from kerastuner.engine.hyperparameters import Choice
input_node = ak.ImageInput()
output_node = ak.DenseBlock(num_units=Choice("num_units", [10, 20]))(input_node)
output_node = ak.ClassificationHead()(output_node)
model = ak.AutoModel(input_node, output_node)
```
### Note
<!---
Why do we need the feature?
-->
Each pull request should only change one hyperparameter in one of the blocks.
### Solution
<!---
Please tell us how to implement the feature,
if you have one in mind.
-->
Example pull requests are #1419 #1425 .
Here are the steps to follow:
1. You can just change any other argument in any other block supported by AutoKeras, as shown [here](https://github.com/keras-team/autokeras/blob/master/autokeras/__init__.py#L16-L38).
2. Change the docstring. [example](https://github.com/keras-team/autokeras/pull/1419/files#diff-7b757ddf51e45c0e02ad8de148d02fd2abd59d670f91d35cac71157e6684c13cL60-R61)
3. Make sure you imported the module. `from kerastuner.engine import hyperparameters`.
4. Change the typing of the argument. [example](https://github.com/keras-team/autokeras/pull/1419/files#diff-7b757ddf51e45c0e02ad8de148d02fd2abd59d670f91d35cac71157e6684c13cL73-R74)
5. Change the saving mechanism to serialized objects. [example](https://github.com/keras-team/autokeras/pull/1419/files#diff-7b757ddf51e45c0e02ad8de148d02fd2abd59d670f91d35cac71157e6684c13cL95-R100)
6. Change the loading mechanism to deserialized objects. [example](https://github.com/keras-team/autokeras/pull/1419/files#diff-7b757ddf51e45c0e02ad8de148d02fd2abd59d670f91d35cac71157e6684c13cR110)
7. Change how we initialize the hyperparameter to self. [example](https://github.com/keras-team/autokeras/pull/1419/files#diff-7b757ddf51e45c0e02ad8de148d02fd2abd59d670f91d35cac71157e6684c13cL80-R85) Copy from where it is originally defined. [example](https://github.com/keras-team/autokeras/pull/1419/files#diff-7b757ddf51e45c0e02ad8de148d02fd2abd59d670f91d35cac71157e6684c13cL115)
8. Change how we use it. [example](https://github.com/keras-team/autokeras/pull/1419/files#diff-7b757ddf51e45c0e02ad8de148d02fd2abd59d670f91d35cac71157e6684c13cL124-R129) | 0easy
|
Title: Anti Volume Stop Loss indicator.. requested.
Body: Hi,
I've read Buff Dormeier's 2011 "Investing w/ Volume Analysis" book at least 3 times, looking for hints in other parts of the book, but the calculation of his Anti-Volume Stop Loss still eludes me. On page 254 Buff says
```C
Length = Round (3+VPCI)
Price = Average (Lows x 1/ VPC x 1 / VPR, Length)
Standard Deviation = 2 x (VPCI X VM)
AVSL = Lower Bollinger Band - (Price, Length, Standard Deviation)
```
The same indicator was implemented by rafka in TradingView.
Can this be added to the library? | 0easy
|
Title: [Roadmap] Heterogeneous Graphs Explainability Support
Body: ### 🚀 The feature, motivation and pitch
Explainability is a key feature of GNNs, which is already implemented in PyG. However, of all the features introduced, only a few have been adapted to heterogeneous graphs.
**Algorithms**: Of all the algorithms implemented for explainability, only the Captum algorithm is compatible with heterogeneous graphs. It would be interesting to adapt other graph-specific algorithms such as ``GNNExplainer`` or ``PGExplainer``, as well as algorithms such as ``AttentionExplainer``.
Moreover, the algorithms adapted by PyG could be extended with new algorithms that have been published over the years (for example, see this [survey](https://arxiv.org/abs/2306.01958)), but this isn't specific to heterogeneous graphs. In the future, we could implement new algorithms for homogeneous and heterogeneous graphs simultaneously, so that this gap does not reappear.
- [ ] Adaptation of ``GNNExplainer`` for heterogeneous graphs.
- [ ] Adaptation of ``PGExplainer`` for heterogeneous graphs.
- [ ] Adaptation of ``AttentionExplainer`` for heterogeneous graphs.
- [ ] Implementation of new explainability algorithms. System to allow adaptation to heterogeneous graphs without additional work.
**Features**: Some features available in the explanations of homogeneous GNNs are missing for heterogeneous GNNs. For example, the ``visualize_graph`` method of ``Explanation`` is not available for ``HeteroExplanation``. This can currently be worked around by calling the ``get_explanation_subgraph`` method and generating the plot by hand with NetworkX (a rough sketch of this workaround is shown below), but it would be nice to implement it so the plot is produced automatically.
- [ ] Implementation of missing features for HeteroExplanation, such as ``visualize_graph``.
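Until then, the manual workaround mentioned above could look roughly like this (sketch only; `explanation` is assumed to be a `HeteroExplanation` produced by an `Explainer`, and the conversion details may need adjusting):
```python
import networkx as nx
import matplotlib.pyplot as plt
from torch_geometric.utils import to_networkx

# `explanation` is assumed to be a HeteroExplanation obtained from an Explainer
subgraph = explanation.get_explanation_subgraph()

# Collapse node/edge types so the standard NetworkX conversion can be used
homogeneous = subgraph.to_homogeneous()
g = to_networkx(homogeneous, to_undirected=True)

nx.draw(g, with_labels=True, node_size=300)
plt.show()
```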
**Metrics**: Currently, the available metrics such as Fidelity or Faithfulness are only available for homogeneous graphs, but those metrics could be adapted for heterogeneous graphs. To continue the work of #5628, we could also think about implementing new metrics for all kinds of graphs, such as sparsity or stability (e.g. see https://arxiv.org/pdf/2012.15445.pdf).
- [ ] Adaptation of fidelity, faithfulness and characterization score for heterogeneous graphs.
- [ ] Sparsity metric for all graphs.
- [ ] Stability metric for all graphs.
### Alternatives
_No response_
### Additional context
_No response_ | 0easy
|
Title: replace listcomp with genexp in discretisation tests
Body: in test_equal_frequency_discretisation.py
in test_equal_width_discretisation.py
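For reference, the change is the usual Python idiom of dropping the square brackets when the comprehension is consumed only once; the assertion below is purely illustrative, not taken from the test files:
```python
values = [0.2, 3.5, 7.1]

# before: the list comprehension builds an intermediate list just to pass it to all()
assert all([v < 10 for v in values])

# after: a generator expression feeds all() lazily, with no intermediate list
assert all(v < 10 for v in values)
```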
| 0easy
|
Title: Not all allure functions are available in dynamic package
Body: We have `allure.epic` but for some reason we do not have `allure.dynamic.epic`
This omission should be fixed.
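The requested usage would look something like this (a sketch of the desired API, mirroring the existing `allure.dynamic` helpers):
```python
import allure


def test_payment_flow():
    # desired API: set the epic at runtime, alongside the existing dynamic helpers
    allure.dynamic.epic("Payments")
    allure.dynamic.feature("Checkout")
    assert True
```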
#### I'm submitting a ...
- [ ] bug report
- [x] feature request
- [ ] support request => Please do not submit support request here, see note at the top of this template.
#### What is the current behavior?
#### If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
#### What is the expected behavior?
#### What is the motivation / use case for changing the behavior?
#### Please tell us about your environment:
- Allure version: 2.1.0
- Test framework: pytest@3.0
- Allure adaptor: allure-pytest@2.0.0b1
#### Other information
[//]: # (
. e.g. detailed explanation, stacktraces, related issues, suggestions
. how to fix, links for us to have more context, eg. Stackoverflow, Gitter etc
)
| 0easy
|
Title: [Feature request] Add apply_to_images to RandomRain
Body: | 0easy
|
Title: Add `joint` as parameter to `num_label_issues()` method
Body: Then you dont need to provide `pred_probs`. This is much more efficient. | 0easy
|