text: string, lengths 20–57.3k
labels: class label, 4 classes
Title: Deprecate `InitSpider` Body: Following https://github.com/scrapy/scrapy/issues/1510#issuecomment-2708531295 If anybody actually needs it, it's just 7 lines of code.
0easy
Title: Customizable ordering of reports Body: ### Describe the problem Our project has about 100 translators each year. We want to follow up on who makes a lot of translations to give them a shout out (or even swag) and to monitor our translations. When a language reaches 95% translated, it gets into production. If it drops beneath 90% we give it a beta label, and at 80% we give it an alpha label. For the moment I run my own scripts to get stats and store them in some [markdown pages](https://github.com/ctlaltdieliet/mattermost-i18n-scoreboard/tree/main/pages). I used to translate on Pootle. As a translator you see your current ranking (week/month?/overall?) and it is a kind of gamification that encourages you to be the best translator of the project for that time frame. When you download a language file, upload it again, and mark the strings as translated, the current stats take all the strings into account (even if you only made a few changes) and not just the effective changes. ### Describe the solution you would like I'd like to get an overview of the top translators for certain time frames. I want to follow up on how translations for each language evolve over time. ### Describe alternatives you have considered Running your own scripts ### Screenshots _No response_ ### Additional context Thank you for being awesome!
0easy
Title: [Feature request] Return Mixing Data from MixUp Body: Right now we return `mix_coef`, but not the data that was used for mixing, which limits the application of the transform.
0easy
Title: `ELSE IF` condition not passed to listeners Body: RF 6.0 enhanced listeners so that more info about control structures is passed to them (#4335). Due to a bug in the code (a missing comma in a single-item tuple), the `ELSE IF` condition is missing. Easy to fix, and tests need to be added as well.
0easy
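The root cause described in the listener issue above, a single-item tuple missing its trailing comma, is easy to reproduce in plain Python:

```python
# Without a trailing comma, parentheses are just grouping: the value
# stays a plain string instead of becoming a one-element tuple.
not_a_tuple = ("condition")
a_tuple = ("condition",)

assert not_a_tuple == "condition"   # bare string, the bug
assert a_tuple == ("condition",)    # real one-element tuple, the fix
assert len(a_tuple) == 1
```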
Title: Missing format string arguments Body: This assertion error string is not properly formatted as the 2 format arguments `y_pred.shape` and `y.shape` are missing: https://github.com/scikit-learn/scikit-learn/blob/551d56c254197c4b6ad63974d749824ed2c7bc58/sklearn/utils/estimator_checks.py#L2139 ```python assert y_pred.shape == y.shape, ( "The shape of the prediction for multioutput data is incorrect." " Expected {}, got {}." ) ``` should become ```python assert y_pred.shape == y.shape, ( "The shape of the prediction for multioutput data is incorrect." " Expected {}, got {}.".format(y_pred.shape, y.shape) ) ```
0easy
Title: DatePickerRange initiates callbacks at every selection, before window is closed. Body: Clicking on the dates on the DatePickerRange initiates callbacks at every click. For example, selecting the start date initiates callback, before one can finish selecting the end date. These callbacks have to all finish running and can take a long time if the user clicks around on the DatePickerRange.
0easy
Title: Positional-only argument containing `=` is considered named argument if keyword accepts `**named` Body: If a keyword accepts a positional-only argument like ```python def example(arg, /): print(arg) ``` using it like ``` Example    foo=bar Example    arg=xxx ``` works fine and the keyword gets `foo=bar` and `arg=xxx` as an argument, respectively. The argument is positional-only, so `=` has no special meaning. If the keyword also accepts free named arguments like ```python def example(arg, /, **named): print(arg, named) ``` using it like ``` Example    foo=bar ``` fails like > Keyword 'example.Example' expected 1 non-named argument, got 0. This is due to a bug in the argument resolving logic where all arguments containing `=` are considered named if the keyword accepts free named arguments. This needs to be fixed so that positional-only arguments are excluded. After the fix we can fix #4821 by changing the signature of the `Format String` keyword to `template, /, *positional, **named`. We should also check whether there are other keywords accepting `**named` that could get the same treatment.
0easy
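A minimal sketch of the resolution rule the issue above asks for; `resolve` is a hypothetical helper for illustration, not Robot Framework's actual implementation. Arguments that land in positional-only slots keep their literal value even when they contain `=`:

```python
import inspect

def example(arg, /, **named):
    return arg, named

def resolve(func, args):
    # Count the positional-only slots; '=' has no named-argument meaning
    # for arguments that fall into them.
    params = inspect.signature(func).parameters
    n_pos_only = sum(1 for p in params.values()
                     if p.kind is inspect.Parameter.POSITIONAL_ONLY)
    positional, named = [], {}
    for a in args:
        name, sep, value = a.partition("=")
        if sep and len(positional) >= n_pos_only:
            named[name] = value
        else:
            positional.append(a)
    return func(*positional, **named)

# 'foo=bar' fills the positional-only slot instead of failing:
assert resolve(example, ["foo=bar"]) == ("foo=bar", {})
# Later '=' arguments still become free named arguments:
assert resolve(example, ["arg=xxx", "extra=1"]) == ("arg=xxx", {"extra": "1"})
```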
Title: Add tests for all linux pack actions Body: The [contrib `linux` pack](https://github.com/StackStorm/st2/tree/master/contrib/linux) has a few broken actions (`check_loadavg`, `check_processes`, `dig`: #4993, `diag_loadavg`, `netstat`: #4947, possibly more) at least on CentOS 8. It's really disheartening to spin up StackStorm and immediately run into broken actions. We should add tests for those actions to the pack. The [tests](https://github.com/StackStorm/st2/tree/master/contrib/core/tests) from the [`core`](https://github.com/StackStorm/st2/tree/master/contrib/core) pack can be used as a template for writing tests for the `linux` pack. I will try to add tests for the `dig` action in my PR #4993, to serve as a concrete example, but adding tests would be a useful way to introduce somebody to developing StackStorm that would have an immediate impact.
0easy
Title: Add a method to check if a document is already indexed Body: # Context We should add a method in `BaseDocIndex` to check if a document is already indexed in the database. In short, it would take the `id` of the given document and check whether it already exists in the database.
0easy
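What such a check could look like, sketched on a toy stand-in rather than DocArray's real `BaseDocIndex`:

```python
class ToyDocIndex:
    """Toy in-memory index used only to illustrate the proposed method."""

    def __init__(self):
        self._db = {}  # id -> document

    def index(self, doc):
        self._db[doc["id"]] = doc

    def __contains__(self, doc_id):
        # The proposed check: is a document with this id already indexed?
        return doc_id in self._db

idx = ToyDocIndex()
idx.index({"id": "doc-1", "text": "hello"})
assert "doc-1" in idx
assert "doc-2" not in idx
```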
Title: strawberry_django_plus.filters.filter makes fields required by default?! Body: When using `strawberry_django_plus.filters.filter` instead of `strawberry_django.filter`, the annotated fields appear to be required, i.e. activated by default in GraphiQL, and using `deprecation_reason` causes schema validation to fail: > ❌ Required input field FooFilter.bar_status cannot be deprecated. ```python @filter(models.Foo) class FooFilter: bar_status: str = gql.django.field(deprecation_reason="use … instead") ``` I've seen that `| None` can be used with the annotation to make it optional, but I think it should be optional by default. code ref: https://github.com/blb-ventures/strawberry-django-plus/blob/273d906ac7191f5eb073e1bf18b96cd1268466f9/strawberry_django_plus/filters.py#L65-L92
0easy
Title: Can Autosklearn handle Multi-Class/Multi-Label Classification and which classifiers will it use? Body: I have been trying to use AutoSklearn with multi-class classification, so my labels look like this:
```
0 1 2 3 4 ... 200
1 0 1 1 1 ... 1
0 1 0 0 1 ... 0
1 0 0 1 0 ... 0
1 1 0 1 0 ... 1
0 1 1 0 1 ... 0
1 1 1 0 0 ... 1
1 0 1 0 1 ... 0
```
I used this code ```python y = y[:, (65,67,54,133,122,63,102,105,39)] X = df.drop(Code, axis=1, errors='ignore') X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) automl = autosklearn.classification.AutoSklearnClassifier( include={'feature_preprocessor': ["no_preprocessing"], }, exclude={ 'classifier': ['random_forest']}, time_left_for_this_task=60*5, per_run_time_limit=60*1, memory_limit = 1024 * 10, n_jobs=-1, metric=autosklearn.metrics.f1_macro, ) ``` but now I want to train Autosklearn on multi-class multi-label classification. Which of these methods should I use? **1-** ``` clf = OneVsRestClassifier(automl, n_jobs=-1) clf.fit(X_train, y_train) ``` **2-** ``` clf = automl clf.fit(X_train, y_train) ``` **3-** I loop over one class at a time and use ``` clf = automl clf.fit(X_train, y_train) ``` so it will be like ``` for i in (65,67,54,133,122,63,102,105,39): y = z[:, i] X = df.drop(Code, axis=1, errors='ignore') X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) automl = autosklearn.classification.AutoSklearnClassifier( include={'feature_preprocessor': ["no_preprocessing"], }, exclude={ 'classifier': ['random_forest']}, time_left_for_this_task=60*5, per_run_time_limit=60*1, memory_limit = 1024 * 10, n_jobs=1, metric=autosklearn.metrics.f1_macro, ) clf = automl clf.fit(X_train, y_train) ``` so I get a different model for each label?
0easy
Title: [Examples] Fix docstring of to_screenshot.py and export_figure.py Body: ## 🧰 Task As identified in https://github.com/napari/napari/pull/7319 the docstrings (which end up being the copy in the napari.org gallery example) are not correct for the screenshot and export_figure examples. See: <img width="891" alt="image" src="https://github.com/user-attachments/assets/12e32f4f-bd04-46e7-af68-2427a7a2c806"> Looks like it's placeholder text that wasn't updated. to_screenshot: https://github.com/napari/napari/blob/d9fd074601d4bac7ff8ff6f0db1338d3301726e5/examples/to_screenshot.py#L1-L10 export_figure https://github.com/napari/napari/blob/d9fd074601d4bac7ff8ff6f0db1338d3301726e5/examples/export_figure.py#L1-L10
0easy
Title: Make html report pretty Body: 💄 - [ ] Show only path_url in the titles - [ ] Add response and request headers - [ ] Add request headers and body - [ ] Add response headers, metadata and body - [ ] Add cURL - [ ] Add response time
0easy
Title: [Doc] Document for soft delete feature Body: Documentation is needed for the soft delete feature recently added to Beanie.
0easy
Title: Add IsQuarterEnd primitive Body: - This primitive determines the `is_quarter_end` property of a datetime column
0easy
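A stdlib-only sketch of the check behind such a primitive; a real Featuretools primitive would more likely wrap pandas' `Series.dt.is_quarter_end`:

```python
from datetime import date

# Last day of each calendar quarter.
QUARTER_ENDS = {(3, 31), (6, 30), (9, 30), (12, 31)}

def is_quarter_end(d: date) -> bool:
    # True when the date is the final day of a quarter.
    return (d.month, d.day) in QUARTER_ENDS

assert is_quarter_end(date(2023, 3, 31))
assert not is_quarter_end(date(2023, 4, 1))
assert is_quarter_end(date(2023, 12, 31))
```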
Title: Can't hide sidebar in watch mode Body: Users can't hide the sidebar in `watch` mode.
0easy
Title: Update ClassicalSimulator to use simulation infrastructure Body: Update Classical Simulator to use https://github.com/quantumlib/Cirq/pull/5417 --- **What is the urgency from your perspective for this issue? Is it blocking important work?** P2 - we should do it in the next couple of quarters
0easy
Title: Add caching to GitHub Actions for faster builds Body: Builds are painfully slow right now; maybe the Docker images can be cached?
0easy
Title: [Feature] generate till it reaches model's context length Body: ### Checklist - [ ] 1. If the issue you raised is not a feature but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed. - [ ] 2. Please use English, otherwise it will be closed. ### Motivation By default, sglang uses 128 tokens as the max_new_tokens even if None is given. Is there a way to specify that the model generates until it reaches its max model length? I have different sequences of generation using the sglang frontend language, so I can't keep track of the input lengths etc. I can't use one single max_new_tokens for different generations. If I hardcode max_new_tokens, the input length is being restricted. Let me know if I can help implement such a feature; it would be very helpful. I am thinking of an implementation that takes the current input token length and the max token length to calculate max_new_tokens? Please let me know if this is a valid concern. ### Related resources _No response_
0easy
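The calculation the reporter hints at, clamping generation to the room left in the context window, can be sketched like this (names are illustrative, not sglang's API):

```python
def fit_max_new_tokens(prompt_len, context_len, requested=None):
    # Generate at most as many tokens as still fit in the context window;
    # an explicit request is honored but never allowed to overflow.
    room = max(context_len - prompt_len, 0)
    return room if requested is None else min(requested, room)

assert fit_max_new_tokens(100, 4096) == 3996          # fill the window
assert fit_max_new_tokens(100, 4096, requested=128) == 128
assert fit_max_new_tokens(4200, 4096) == 0            # nothing left
```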
Title: Get latest Disqus comments and show them in admin dashboard Body:
0easy
Title: Code Changes Commits metric API Body: The canonical definition is here: https://chaoss.community/?p=4707
0easy
Title: [DOCS] Explain matplotlib get_cmap function Body: Would it make sense to implement a `get_cmap` function as `colorcet.cm.get_cmap` that could override Matplotlib's so that external libraries using MPL's `get_cmap` could just replace it with Colorcet's `get_cmap` if Colorcet is available at runtime? This would make interfaceing with Colorcet super easy for other libraries already leveraging MPL's colormaps. I'm thinking that this function would have the same arguments and behavior as MPL's `get_cmap` but it would also grab Colorcet colormaps by name and default to Colorcet if MPL has a similarly named colormap.
0easy
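The proposed lookup order (prefer Colorcet, fall back to Matplotlib) can be sketched with toy registries standing in for `colorcet.cm` and MPL's colormap lookup:

```python
def make_get_cmap(primary, fallback):
    """Build a get_cmap that prefers `primary` (e.g. Colorcet's registry)
    and defers to `fallback` (e.g. Matplotlib's lookup) otherwise."""
    def get_cmap(name):
        if name in primary:
            return primary[name]
        return fallback(name)
    return get_cmap

# Toy registries standing in for colorcet.cm and Matplotlib's lookup:
colorcet_like = {"fire": "colorcet-fire"}
mpl_like = {"viridis": "mpl-viridis", "fire": "mpl-fire"}.get

get_cmap = make_get_cmap(colorcet_like, mpl_like)
assert get_cmap("fire") == "colorcet-fire"    # Colorcet wins on a name clash
assert get_cmap("viridis") == "mpl-viridis"   # falls back to Matplotlib
```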
Title: Possibility to give a custom name to a suite using `Name` setting Body: Suite names are generated from file or directory names by default so that, for example, `example_suite.robot` creates a suite `Example Suite`. This works fine in general, but makes it inconvenient or even impossible to use special characters like `!` or `_`. Longer suite names can also be inconvenient as file/directory names. An easy solution to allow using whatever names is adding a new `Name` setting. The name would still be set based on the file/directory name by default, but this new setting would allow overriding it. In practice the new setting will look like this: ```robotframework *** Settings *** Name    Custom name! ``` Being able to set a custom name for suites like this would make issue #4015 more powerful. Without this you needed to use the `--name` option in addition to `__init__.robot` files to be able to fully configure the virtual top level suite created when executing multiple files/directories. This change only affects parsing and is fairly straightforward. This is a good issue for anyone interested in getting more familiar with Robot's parser!
0easy
Title: Use Annotated with Indexed to complete type hints Body: ### Discussed in https://github.com/roman-right/beanie/discussions/637 <div type='discussions-op-text'> <sup>Originally posted by **hudrazine** August 4, 2023</sup> If you define `Indexed` as a type hint for a field, the type checker will complain: ```python class Task(Document): title: Indexed(str) # I get complaints from pylance class Settings: name = "inbox_tasks" ``` This is due to the use of a direct call expression in the type. Also, the type checker cannot parse this as a specific type (in this case, str), so the type hint is not reflected in the IDE either. This way, they won't complain: ```python IndexedTitle = Indexed(str) # Define outside class Task(Document): title: IndexedTitle # pylance also smiles ... ``` But the IDE does not reflect the correct type hints; above all, this is not smart. So, use [typing.Annotated](https://docs.python.org/ja/3/library/typing.html#typing.Annotated): ```python class Task(Document): title: Annotated[str, Indexed(str)] # or like this title: Annotated[str, Indexed()] title: Annotated[str, Indexed(index_type=pymongo.TEXT)] ... ``` In this way, indexing metadata can be added to the field while providing type hints. In the current `beanie` (1.21.0), the type hint works, but it is not marked as Indexed. I think this method is well proven and has been incorporated in Pydantic and FastAPI; do you think it would be beneficial for `beanie`?</div>
0easy
Title: Drop Python 3.7 support Body: Python 3.7 is now end of life - https://devguide.python.org/versions/#unsupported-versions We should do the following: - [x] Update release.yaml to use a more modern Python version https://github.com/piccolo-orm/piccolo/blob/36e80f58b87822244e405736be141e68b5ae7fa4/.github/workflows/release.yaml#L17 - [x] Update our [CI scripts](https://github.com/piccolo-orm/piccolo/blob/master/.github/workflows/tests.yaml) - [x] Remove 3.7 from `python-version: ["3.7", "3.8", "3.9", "3.10", "3.11"]` - [x] Update `if: matrix.python-version == '3.7'` to `if: matrix.python-version == '3.11'`. - [x] Update `setup.py` - [x] `python_requires=">=3.7.0"` -> `python_requires=">=3.8.0"` https://github.com/piccolo-orm/piccolo/blob/36e80f58b87822244e405736be141e68b5ae7fa4/setup.py#L61 - [x] And drop this line: https://github.com/piccolo-orm/piccolo/blob/36e80f58b87822244e405736be141e68b5ae7fa4/setup.py#L87 - [x] Wherever we use `from typing_extensions import Literal` or `from typing_extensions import Protocol` we can now import them directly from the `typing` module instead. `typing_extensions` can then be moved from `requirements.txt` to `test-requirements.txt`. The reason for dropping older versions is it makes our CI run faster, and we can adopt newer Python features without having to worry about backwards compatibility.
0easy
Title: Custom text mime types Body: It is no longer possible to configure which types of content should not be base64 encoded. I need to ignore the "application/vnd.oai.openapi" format, which is used by Redoc. Also, the docs are not up to date, as they state that it is still possible to pass TEXT_MIME_TYPES.
0easy
Title: Implement Dataset Entry for Datasets Body: We have a new `DatasetEntry` class which helps us to generalize over datasets and enforce a common formatting. We need to implement this class for a couple of more datasets: - [ ] SODA (@hardikyagnik) - [ ] JokeExplanation (@sampatkalyan, needs approval from CodeOwner) - [ ] WebGPT (@CloseChoice, needs review) - [x] Alpaca (@CloseChoice - [x] AlpacaGpt4 (@CloseChoice) All of these datasets are found [here](https://github.com/LAION-AI/Open-Assistant/blob/main/model/model_training/custom_datasets/qa_datasets.py). An example of how the new class is implemented is found [here](https://github.com/LAION-AI/Open-Assistant/blob/main/model/model_training/custom_datasets/qa_datasets.py). PRs can be implementations for a SINGLE Dataset. If you implement this, then checking the dataset with our script would be nice (but optional): ```bash python check_dataset_appearances.py -d <dataset-name> --cache_dir <cache-dir> --mode sft ``` If you take up one of the datasets, please mention the dataset name in your comment, then I'll mention you at the corresponding dataset
0easy
Title: Gridlines automatically created in matplotlib and seaborn plots [BUG] Body: The Darts package seems to have some parameter under the hood that inserts gridlines into every matplotlib and seaborn plot. If I do not import darts, this seaborn heatmap doesn't show gridlines: ``` import seaborn as sns import numpy as np from lib.data_ingestion.general_utils import pickle_file, load_pickle_file combined_park_df = load_pickle_file(f'pickle_files/balti2_preprocessed_df.pkl') overall_corr = combined_park_df[['edm_power_feed_in', 'edm_p_mw', 'power_diff']].corr() matrix = np.triu(overall_corr) sns.heatmap(overall_corr, annot = True) ``` ![image](https://github.com/unit8co/darts/assets/59031005/4f7799c3-5c91-44fd-be69-532d9563a9b1) However, if I include the darts import as part of the imports, gridlines will appear in the plot: ``` import seaborn as sns import numpy as np from lib.data_ingestion.general_utils import pickle_file, load_pickle_file import darts combined_park_df = load_pickle_file(f'pickle_files/balti2_preprocessed_df.pkl') overall_corr = combined_park_df[['edm_power_feed_in', 'edm_p_mw', 'power_diff']].corr() matrix = np.triu(overall_corr) sns.heatmap(overall_corr, annot = True) ``` ![image](https://github.com/unit8co/darts/assets/59031005/e14e8963-f224-4d5b-b5fb-de976d20b9ee) Is there a reason why it's implemented this way?
0easy
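This is consistent with an import-time side effect on Matplotlib's global style. Assuming `axes.grid` is the setting being flipped (an assumption, not confirmed here from the Darts source), it can be checked and undone like this:

```python
import matplotlib as mpl

# Simulate an import that turns gridlines on globally, then opt back out.
mpl.rcParams["axes.grid"] = True       # what a style-setting import might do
assert mpl.rcParams["axes.grid"]

mpl.rcParams["axes.grid"] = False      # restore Matplotlib's default
assert not mpl.rcParams["axes.grid"]
```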
Title: Experiment stuck due to hyperparameter suggestion pod gettin OOM Killed Body: /kind bug **What steps did you take and what happened:** Every time I try to run an experiment (in this case using Bayesian Optimization) after 18-25 trials, the pod that schedules the trials with suggested hyperparameters gets OOM Killed and the experiment does not continue. If you manually increase the memory resources and limits of the deployment, it works like a charm. **What did you expect to happen:** It should either take less memory and work with this kind of numbers of trials without any problem, or more easily just increase the default limit of the suggestion pod memory to a more realistic number. **Anything else you would like to add:** The experiment is the following: ```yaml metadata: name: [REDACTED] namespace: [REDACTED] uid: ee155028-e668-4ce3-a737-87e376268b43 resourceVersion: '68982612' generation: 2 creationTimestamp: '2024-02-09T09:17:02Z' finalizers: - update-prometheus-metrics managedFields: - manager: Go-http-client operation: Update apiVersion: kubeflow.org/v1beta1 time: '2024-02-09T09:17:02Z' fieldsType: FieldsV1 fieldsV1: f:metadata: ... - manager: kubectl-edit operation: Update apiVersion: kubeflow.org/v1beta1 time: '2024-02-19T08:13:38Z' fieldsType: FieldsV1 fieldsV1: f:spec: f:parallelTrialCount: {} - manager: Go-http-client operation: Update apiVersion: kubeflow.org/v1beta1 time: '2024-02-19T09:06:05Z' fieldsType: FieldsV1 fieldsV1: ... 
subresource: status spec: parameters: - name: batch_size parameterType: discrete feasibleSpace: list: - '32' - '64' - '128' - '256' - name: fnok_weight parameterType: discrete feasibleSpace: list: - '2.0' - '2.8' - '3.2' - name: nok_weight parameterType: discrete feasibleSpace: list: - '2.0' - '2.8' - '3.2' - name: epochs parameterType: discrete feasibleSpace: list: - '30' - '50' - '100' - '150' - '200' - '250' - name: model parameterType: categorical feasibleSpace: list: - [REDACTED] - [REDACTED] - name: base_lr parameterType: discrete feasibleSpace: list: - '0.001' - '0.0001' - name: fraction parameterType: discrete feasibleSpace: list: - '0.05' - '0.1' - '0.25' - '0.5' - name: trainable_layers parameterType: categorical feasibleSpace: list: - dense_12 - dense_12;dense_11 - dense_12;dense_11;dense_10;dense_9 - dense_12;dense_11;dense_10;dense_9;conv2d_12;conv2d_11 - all objective: type: maximize objectiveMetricName: test_f2 additionalMetricNames: - test_recall - test_specificity - test_accuracy - test_precision - model_runid metricStrategies: - name: test_f2 value: max - name: test_recall value: max - name: test_specificity value: max - name: test_accuracy value: max - name: test_precision value: max - name: model_runid value: latest algorithm: algorithmName: bayesianoptimization algorithmSettings: - name: base_estimator value: GP - name: n_initial_points value: '10' - name: acq_func value: gp_hedge - name: acq_optimizer value: auto - name: random_state value: '12' trialTemplate: retain: true trialSpec: apiVersion: batch/v1 kind: Job spec: backoffLimit: 1 template: metadata: annotations: sidecar.istio.io/inject: 'false' spec: containers: - command: - python3 - '-u' - scriptKubetrain.py - '--batch_size=${trialParameters.batch_size}' - '--epochs=${trialParameters.epochs}' - '--model=${trialParameters.model}' - '--base_lr=${trialParameters.base_lr}' - '--fnok_weight=${trialParameters.fnok_weight}' - '--nok_weight=${trialParameters.nok_weight}' - 
'--fraction=${trialParameters.fraction}' - '--referencias=" [REDACTED]"' - '--images_folder=/rx/L4' - '--patches_folders=/data/patches/ALL/ [REDACTED]/' - '--in_memory=False' - '--save_folder=/data/models/ [REDACTED]/' - '--model_name= [REDACTED]' - '--trainable_layers="${trialParameters.trainable_layers}"' image: docker-registry:5000/ [REDACTED] imagePullPolicy: Always name: training-container resources: limits: nvidia.com/mig-2g.10gb: 1 volumeMounts: - mountPath: /data name: mlops - mountPath: /rx name: rx-mount imagePullSecrets: - name: registry-access-secret restartPolicy: Never volumes: - name: mlops nfs: path: /mlops server: nfs-srv - flexVolume: driver: fstab/cifs fsType: cifs options: mountOptions: iocharset=utf8,file_mode=0777,dir_mode=0777,noperm networkPath: [REDACTED] secretRef: name: cifs-creds name: rx-mount trialParameters: - name: batch_size reference: batch_size - name: epochs reference: epochs - name: model reference: model - name: base_lr reference: base_lr - name: fraction reference: fraction - name: fnok_weight reference: fnok_weight - name: nok_weight reference: nok_weight - name: trainable_layers reference: trainable_layers primaryContainerName: training-container successCondition: status.conditions.#(type=="Complete")#|#(status=="True")# failureCondition: status.conditions.#(type=="Failed")#|#(status=="True")# parallelTrialCount: 4 maxTrialCount: 1000 maxFailedTrialCount: 10 metricsCollectorSpec: collector: kind: StdOut resumePolicy: LongRunning status: startTime: '2024-02-09T09:16:41Z' conditions: - type: Created status: 'True' reason: ExperimentCreated message: Experiment is created lastUpdateTime: '2024-02-09T09:16:41Z' lastTransitionTime: '2024-02-09T09:16:41Z' - type: Running status: 'True' reason: ExperimentRunning message: Experiment is running lastUpdateTime: '2024-02-09T09:17:24Z' lastTransitionTime: '2024-02-09T09:17:24Z' currentOptimalTrial: ... 
``` **Environment:** - Katib version (check the Katib controller image version): v0.16.0-rc.1 - Kubernetes version: (`kubectl version`): Server Version: v1.25.9 - OS (`uname -a`): Linux 6.5.0-15-generic --- <!-- Don't delete this message to encourage users to support your issue! --> Impacted by this bug? Give it a 👍 We prioritize the issues with the most 👍
0easy
Title: `history pull` - pull the command history from parallel sessions to the current session Body: The `history pull` command was implemented in [xontrib-rc-awesome](https://github.com/anki-code/xontrib-rc-awesome) for the sqlite history backend - [history-pull](https://github.com/anki-code/xontrib-rc-awesome/blob/db11c3dadb861dc594e069f5b12d3ac34ed78af5/xontrib/rc_awesome.xsh#L218-L248). We need to: * add a `pull` method to [History](https://github.com/xonsh/xonsh/blob/bba684e0bdf182ccb898f28d0e3e63c717df61e6/xonsh/history/base.py) * add the implementation of the `pull` method to [SqliteHistory](https://github.com/xonsh/xonsh/blob/bba684e0bdf182ccb898f28d0e3e63c717df61e6/xonsh/history/sqlite.py#L251) * [optional] add the same logic to other history backends ## For community ⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
0easy
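A sketch of what the sqlite-backed `pull` could do; the table and column names below are illustrative, not xonsh's exact history schema:

```python
import sqlite3

def pull_new_entries(conn, last_pull_id, current_session):
    """Fetch commands written by other sessions since the last pull
    (schema names here are illustrative, not xonsh's real schema)."""
    return conn.execute(
        "SELECT rowid, inp FROM xonsh_history "
        "WHERE rowid > ? AND sessionid != ? ORDER BY rowid",
        (last_pull_id, current_session),
    ).fetchall()

# In-memory demo standing in for the shared history database:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE xonsh_history (inp TEXT, sessionid TEXT)")
conn.executemany(
    "INSERT INTO xonsh_history VALUES (?, ?)",
    [("ls", "A"), ("git status", "B"), ("pwd", "A")],
)
pulled = pull_new_entries(conn, last_pull_id=0, current_session="A")
assert [inp for _, inp in pulled] == ["git status"]
```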
Title: Addition: SimplifiedRegressionDashboard Body: The default dashboard can be very overwhelming with lots of tabs, toggles and dropdowns. It would be nice to offer a simplified version. This can be built as a custom ExplainerComponent and included in `explainerdashboard.custom`, so that you could e.g.: ``` from explainerdashboard import RegressionExplainer, ExplainerDashboard from explainerdashboard.custom import SimplifiedRegressionDashboard explainer = RegressionExplainer(model, X, y) ExplainerDashboard(explainer, SimplifiedRegressionDashboard).run() ``` It should probably include at least: predicted vs actual plot, Shap importances, Shap dependence, Shap contributions graph. And ideally it would add in some dash_bootstrap_components sugar to make it look extra nice, plus perhaps some extra information on how to interpret the various graphs.
0easy
Title: Rounding error leads to bad display of status color bar Body: Observed on RF7. When I have: 1 successful test, 4 failed and 1 skipped; the report badly rounds the proportion of each status, ending up in this weird display: ![image](https://github.com/user-attachments/assets/1e8d74d8-52c4-495f-a09e-13265db7d9e0) Here is the HTML element: ![image](https://github.com/user-attachments/assets/3877b2eb-f296-4d85-8bae-4856deee6645) The same issue is present on both report and log files. It can be reproduced on my own installation and online playground. It seems caused by a bad rounding of 1/6, which is slightly less than 16.7%, so that 1/6+4/6+1/6 = 100.1% when 1/6 is rounded to 16.7%. Interestingly enough, the report is produced properly in case of 4 success, 1 fail and 1 skip. Here is the code to reproduce: ```robot *** Test Cases *** Test 1 Pass Execution message Test 2 Fail Test 3 Fail Test 4 Fail Test 5 Fail Test 6 Skip ```
0easy
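The arithmetic behind the 100.1% is easy to confirm, and a largest-remainder style adjustment (round all but one segment, give the last segment whatever remains) keeps the bar at exactly 100%:

```python
# Rounding each share independently to one decimal overshoots 100%:
shares = [1/6, 4/6, 1/6]
rounded = [round(100 * s, 1) for s in shares]
assert rounded == [16.7, 66.7, 16.7]
assert round(sum(rounded), 1) == 100.1   # the overflow seen in the report

# Fix: round all but the last segment, give the last the remainder.
fixed = rounded[:-1] + [round(100 - sum(rounded[:-1]), 1)]
assert fixed[-1] == 16.6
assert round(sum(fixed), 1) == 100.0
```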
Title: --backend-store-uri error Body: ### Summary Hi, When I run the command below to use a remote postgres database as metadata storage, I got errors. Could you help me resolve this? ``` $mlflow server --host 0.0.0.0 --port 7031 --backend-store-uri postgresql://test_name:test_pass@localhost:5432/mlflow_db 2025/02/24 07:59:23 ERROR mlflow.cli: Error initializing backend store 2025/02/24 07:59:23 ERROR mlflow.cli: Can't replace canonical symbol for '__firstlineno__' with new int value 615 Traceback (most recent call last): File "~/miniconda3/envs/model_registry_v1/lib/python3.13/site-packages/mlflow/cli.py", line 426, in server initialize_backend_stores(backend_store_uri, registry_store_uri, default_artifact_root) ~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "~/miniconda3/envs/model_registry_v1/lib/python3.13/site-packages/mlflow/server/handlers.py", line 369, in initialize_backend_stores _get_tracking_store(backend_store_uri, default_artifact_root) ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "~/miniconda3/envs/model_registry_v1/lib/python3.13/site-packages/mlflow/server/handlers.py", line 346, in _get_tracking_store _tracking_store = _tracking_store_registry.get_store(store_uri, artifact_root) File "~/miniconda3/envs/model_registry_v1/lib/python3.13/site-packages/mlflow/tracking/_tracking_service/registry.py", line 45, in get_store return self._get_store_with_resolved_uri(resolved_store_uri, artifact_uri) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "~/miniconda3/envs/model_registry_v1/lib/python3.13/site-packages/mlflow/tracking/_tracking_service/registry.py", line 56, in _get_store_with_resolved_uri return builder(store_uri=resolved_store_uri, artifact_uri=artifact_uri) File "~/miniconda3/envs/model_registry_v1/lib/python3.13/site-packages/mlflow/server/handlers.py", line 165, in _get_sqlalchemy_store from mlflow.store.tracking.sqlalchemy_store import 
SqlAlchemyStore File "~/miniconda3/envs/model_registry_v1/lib/python3.13/site-packages/mlflow/store/tracking/sqlalchemy_store.py", line 11, in <module> import sqlalchemy File "~/miniconda3/envs/model_registry_v1/lib/python3.13/site-packages/sqlalchemy/__init__.py", line 13, in <module> from .engine import AdaptedConnection as AdaptedConnection File "~/miniconda3/envs/model_registry_v1/lib/python3.13/site-packages/sqlalchemy/engine/__init__.py", line 18, in <module> from . import events as events File "~/miniconda3/envs/model_registry_v1/lib/python3.13/site-packages/sqlalchemy/engine/events.py", line 19, in <module> from .base import Connection File "~/miniconda3/envs/model_registry_v1/lib/python3.13/site-packages/sqlalchemy/engine/base.py", line 30, in <module> from .interfaces import BindTyping File "~/miniconda3/envs/model_registry_v1/lib/python3.13/site-packages/sqlalchemy/engine/interfaces.py", line 38, in <module> from ..sql.compiler import Compiled as Compiled File "~/miniconda3/envs/model_registry_v1/lib/python3.13/site-packages/sqlalchemy/sql/__init__.py", line 14, in <module> from .compiler import COLLECT_CARTESIAN_PRODUCTS as COLLECT_CARTESIAN_PRODUCTS File "~/miniconda3/envs/model_registry_v1/lib/python3.13/site-packages/sqlalchemy/sql/compiler.py", line 615, in <module> class InsertmanyvaluesSentinelOpts(FastIntFlag): ...<14 lines>... RENDER_SELECT_COL_CASTS = 64 File "~/miniconda3/envs/model_registry_v1/lib/python3.13/site-packages/sqlalchemy/util/langhelpers.py", line 1663, in __init__ sym = symbol(k, canonical=v) File "~/miniconda3/envs/model_registry_v1/lib/python3.13/site-packages/sqlalchemy/util/langhelpers.py", line 1635, in __new__ raise TypeError( ...<2 lines>... ) TypeError: Can't replace canonical symbol for '__firstlineno__' with new int value 615 ``` Thanks ### Notes - Make sure to open a PR from a **non-master** branch. - Sign off the commit using the `-s` flag when making a commit: ```sh git commit -s -m "..." 
# ^^ make sure to use this ``` - Include `#{issue_number}` (e.g. `#123`) in the PR description when opening a PR.
0easy
Title: Remove deprecated scrapy.downloadermiddlewares.decompression Body: Deprecated in 2.7.0
0easy
Title: add ploomber cloud {cmd} cli suggestions Body: When a user types an incorrect command: ``` ploomber {cmd} ``` We print a "did you mean {another}?" if there's a close match, but this functionality does not work with nested commands (e.g. `ploomber cloud`)
0easy
Title: feature: extend Response API Body: We should specify concrete broker Response classes (like [KafkaResponse](https://github.com/airtai/faststream/blob/main/faststream/kafka/response.py#L4)) to make them support all broker features ([Publisher.publish](https://github.com/airtai/faststream/blob/main/faststream/kafka/publisher/usecase.py#L100) options). In other words, **KafkaResponse** should have the following signature: ```python class KafkaResponse(Response): def __init__( self, body: "SendableMessage", *, headers: Optional["AnyDict"] = None, correlation_id: Optional[str] = None, # Kafka specific timestamp_ms: Optional[int] = None, key: Optional[bytes] = None, ... ) -> None: ... ``` Other brokers' Response classes should be extended in the same way
0easy
Title: [new] `find_value(arr, value)` Body: Returns the 0-indexed position of `value` in the array; if the value is not found, returns null. Get inspired by https://stackoverflow.com/questions/70885708/how-to-find-the-index-of-an-element-in-an-array-in-bigquery
0easy
Title: [DOC] AutoREG docstring includes non-existent arguments Body: #### Describe the issue linked to the documentation The [`AutoREG` documentation](https://www.sktime.net/en/latest/api_reference/auto_generated/sktime.forecasting.auto_reg.AutoREG.html) includes arguments such as `endog` and `exog` that are not present in the model. They should be removed from the docstrings.
0easy
Title: Provide user-facing functionality for fixing serialization/deserialization of custom scalars Body: When parsing the textual representation of a GraphQL schema into a schema object, custom scalars' serialization and deserialization code are not picked up since they are not part of the textual representation. This causes issues when working with custom scalar objects. #388 fixes this problem for our test suite. We need to expose a similar mechanism that users of the package can use to resolve this problem for their use cases.
0easy
Title: Test with the oldest version of pytest and Memray we support in CI Body: Currently we only test with the newest version of both pytest and Memray, but we declare support for older versions. We should test that pytest-memray really does work with the oldest versions of pytest and Memray that we claim to support.
0easy
Title: Failing tests in DocStrings Body: Checking the Python files in NLTK with "python -m doctest" reveals that many tests are failing. In many cases, the failures are just cosmetic discrepancies between the expected and the actual output, such as missing a blank line, or unescaped linebreaks. Other cases may be real bugs. If these failures could be avoided, it would become possible to improve CI by running "python -m doctest" each time a python file was modified. Here is an updated list, after applying #2990, and unzipping those corpora marked unzip="1": > nltk/parse/corenlp.py: 9 passed and 33 failed. nltk/parse/stanford.py: 1 passed and 16 failed. nltk/corpus/reader/framenet.py: 79 passed and 9 failed. nltk/tokenize/regexp.py: 14 passed and 7 failed. nltk/draw/table.py: 0 passed and 6 failed. nltk/tokenize/simple.py: 7 passed and 6 failed. nltk/text.py: 16 passed and 5 failed. nltk/tokenize/stanford_segmenter.py: 3 passed and 5 failed. nltk/draw/util.py: 1 passed and 4 failed. nltk/tag/hunpos.py: 1 passed and 4 failed. nltk/tokenize/__init__.py: 6 passed and 4 failed. nltk/translate/ibm1.py: 12 passed and 4 failed. nltk/tag/brill_trainer.py: 23 passed and 3 failed. nltk/tokenize/destructive.py: 8 passed and 3 failed. nltk/tokenize/treebank.py: 32 passed and 3 failed. nltk/tokenize/util.py: 27 passed and 3 failed. nltk/translate/meteor_score.py: 8 passed and 3 failed. nltk/corpus/reader/categorized_sents.py: 5 passed and 2 failed. nltk/corpus/reader/opinion_lexicon.py: 3 passed and 2 failed. nltk/corpus/reader/reviews.py: 8 passed and 2 failed. nltk/metrics/agreement.py: 8 passed and 2 failed. nltk/parse/dependencygraph.py: 13 passed and 2 failed. nltk/parse/evaluate.py: 6 passed and 2 failed. nltk/parse/util.py: 5 passed and 2 failed. nltk/tag/stanford.py: 2 passed and 2 failed. nltk/tokenize/stanford.py: 3 passed and 2 failed. nltk/corpus/__init__.py: 1 passed and 1 failed. nltk/corpus/reader/comparative_sents.py: 5 passed and 1 failed. 
nltk/corpus/reader/__init__.py: 1 passed and 1 failed. nltk/corpus/reader/pros_cons.py: 2 passed and 1 failed. nltk/probability.py: 39 passed and 1 failed. nltk/sentiment/util.py: 7 passed and 1 failed. nltk/stem/arlstem2.py: 3 passed and 1 failed. nltk/stem/rslp.py: 3 passed and 1 failed. nltk/stem/snowball.py: 6 passed and 1 failed. nltk/tag/__init__.py: 12 passed and 1 failed. nltk/tokenize/casual.py: 9 passed and 1 failed. nltk/translate/ibm2.py: 18 passed and 1 failed. nltk/translate/ibm3.py: 23 passed and 1 failed. nltk/translate/ibm4.py: 25 passed and 1 failed. nltk/translate/ibm5.py: 22 passed and 1 failed.
0easy
Title: Occasional doc build failure `example_debug_logging.py` Body: Occasional issue building docs with `example_single_configuration.py` Error: ``` generating gallery for examples/40_advanced... [ 50%] example_debug_logging.py Warning, treated as error: /home/runner/work/auto-sklearn/auto-sklearn/examples/40_advanced/example_single_configuration.py failed to execute correctly: Traceback (most recent call last): File "/home/runner/work/auto-sklearn/auto-sklearn/examples/40_advanced/example_single_configuration.py", line 77, in <module> print(pipeline.named_steps) AttributeError: 'NoneType' object has no attribute 'named_steps' ``` [Test log](https://github.com/automl/auto-sklearn/pull/1229/checks?check_run_id=3414079944)
0easy
Title: Add StandardScaler Estimator Body: The StandardScaler estimator scales the data to zero mean and unit variance. Use the IncrementalBasicStatistics estimator to generate the mean and variance used to scale the data. Investigate where the new implementation may have low performance and include guards in the code to fall back to Scikit-learn as necessary. The final deliverable would be to add this estimator to the 'spmd' interfaces, which are effective on MPI-enabled supercomputers; this will use the underlying MPI-enabled mean and variance calculators in IncrementalBasicStatistics. This is an easy-difficulty project, and would be a medium time commitment when combined with other pre-processing projects. https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html#sklearn.preprocessing.StandardScaler
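As an illustration of the computation the estimator performs (not the sklearnex implementation itself, which would delegate mean/variance to IncrementalBasicStatistics), standard scaling of one column reduces to:

```python
import math

def standard_scale(column):
    # Center to zero mean, then divide by the (population) standard deviation
    n = len(column)
    mean = sum(column) / n
    var = sum((x - mean) ** 2 for x in column) / n
    std = math.sqrt(var)
    if std == 0:
        std = 1.0  # guard: leave constant columns unscaled
    return [(x - mean) / std for x in column]

scaled = standard_scale([1.0, 2.0, 3.0])  # zero mean, unit variance
```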
0easy
Title: Setting the seed when instantiating the model Body: ### 🚀 The feature It would be nice to have something like this: ```py df = SmartDataframe( df, config={ "llm": llm, "verbose": True, }, seed=26 ) ``` ### Motivation, pitch Setting the seed in the model on OpenAI's or AzureOpenAI's servers would allow some deterministic behavior. For now my users don't understand why the module gives different answers to the same question (exact same tokens) when logging back in to the app. ### Alternatives _No response_ ### Additional context _No response_
0easy
Title: Add unit tests for File Metrics Collector Body: /kind feature We should add unit tests for our [File Metrics Collector](https://github.com/kubeflow/katib/tree/master/pkg/metricscollector/v1beta1/file-metricscollector) to verify log parsing. That will help to avoid issues similar to: https://github.com/kubeflow/katib/issues/1754 /help /good-first-issue --- <!-- Don't delete this message to encourage users to support your issue! --> Love this feature? Give it a 👍 We prioritize the features with the most 👍
0easy
Title: [Optimizer] Move AdamW in GluonNLP to MXNet Body: ## Description Move the adamw optimizer defined here to MXNet. https://github.com/dmlc/gluon-nlp/blob/20321598a732d254d4a9e036eecbe0ae8abb48a0/src/gluonnlp/optimizer.py#L31-L32
0easy
Title: Allow to customize task names when using grid Body: This is related to #647 - we already allow custom product paths, but it is not possible to customize task names
0easy
Title: Deprecate unused scrapy.utils code Body: As far as I can see the following functions and classes from `scrapy.utils` are not used anywhere (usually because the code using them is gone): * `scrapy.utils.python.flatten()` * `scrapy.utils.python.iflatten()` * `scrapy.utils.python.equal_attributes()` * `scrapy.utils.request.request_authenticate()` * `scrapy.utils.serialize.ScrapyJSONDecoder` * `scrapy.utils.test.assert_samelines()` `flatten()` and `iflatten()` are also present in Parsel where they are used but I wouldn't recommend switching to those (and they are easy to reimplement anyway).
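For anyone still depending on the deprecated helpers, `flatten()`/`iflatten()` really are easy to reimplement — a rough sketch, not the exact Scrapy/Parsel code (which also handles other iterable types):

```python
def iflatten(x):
    # Lazily yield leaf items from arbitrarily nested lists/tuples
    for el in x:
        if isinstance(el, (list, tuple)):
            yield from iflatten(el)
        else:
            yield el

def flatten(x):
    # Eager variant: collect the flattened items into a list
    return list(iflatten(x))

print(flatten([1, [2, [3, 4]], (5,)]))  # [1, 2, 3, 4, 5]
```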
0easy
Title: Boolean attributes in button: disabled Body: Hello. I'm trying to use [the `disabled` attribute](https://www.w3schools.com/Tags/att_button_disabled.asp) in the button element. Firstly, I've noticed that its type is String instead of Boolean. Secondly, I've tried to set the prop disabled='disabled', but it does not work; the button is still enabled. Other props of button with expected similar behaviour (boolean props) are: autoFocus, contentEditable, draggable, hidden, spellCheck, etc.
0easy
Title: [FEA] pylance/pyright Body: **Is your feature request related to a problem? Please describe.** MyPy is frustratingly slow, it'd be great to try something supposedly faster like pylance/pyright Depending on success, we may want to move to that for dev and only run mypy for ci To get started, just some sort of bin script, and a round of basic fix attempts, would go far!
0easy
Title: unexpected output for HF model with matching Body: ## πŸ› Bug <!-- A clear and concise description of what the bug is. --> ### To Reproduce without batching all works as expected ``` {'output': 'What is the capital of Greece?\n\nAthens.'} ``` but with batch, it returns just the first character ``` {'output': 'W'} ``` #### Code sample ```py import torch import litserve as ls from transformers import AutoTokenizer, AutoModelForCausalLM class JambaLitAPI(ls.LitAPI): def __init__( self, model_name: str = "ai21labs/AI21-Jamba-1.5-Mini", max_new_tokens: int = 100 ): self.model_name = model_name self.max_new_tokens = max_new_tokens def setup(self, device): # Load the model and tokenizer from Hugging Face Hub # For example, using the `distilbert-base-uncased-finetuned-sst-2-english` model for sentiment analysis # You can replace the model name with any model from the Hugging Face Hub self.model = AutoModelForCausalLM.from_pretrained( self.model_name, torch_dtype=torch.bfloat16, device_map="auto", use_mamba_kernels=False ).eval() self.tokenizer = AutoTokenizer.from_pretrained(self.model_name, legacy=False) def decode_request(self, request): # Extract text from request # This assumes the request payload is of the form: {'input': 'Your input text here'} return request["input"] def predict(self, text): print(text) # Use the loaded pipeline to perform inference inputs = self.tokenizer(text, return_tensors='pt') input_ids = inputs.to(self.model.device)["input_ids"] print(input_ids) output_ids = self.model.generate( input_ids, max_new_tokens=self.max_new_tokens )[0] print(output_ids) text = self.tokenizer.decode(output_ids, skip_special_tokens=True) print(text) return text def encode_response(self, output): # Format the output from the model to send as a response # This example sends back the label and score of the prediction return {"output": output} if __name__ == "__main__": # Create an instance of your API api = JambaLitAPI() # Start the server, specifying the port 
server = ls.LitServer(api, accelerator="cuda", devices=1, max_batch_size=4) # print("run the server...") server.run(port=8000) ``` ### Expected behavior <!-- A clear and concise description of what you expected to happen. --> ### Environment If you published a Studio with your bug report, we can automatically get this information. Otherwise, please describe: - PyTorch/Jax/Tensorflow Version (e.g., 1.0): - OS (e.g., Linux): - How you installed PyTorch (`conda`, `pip`, source): - Build command you used (if compiling from source): - Python version: - CUDA/cuDNN version: - GPU models and configuration: - Any other relevant information: ### Additional context <!-- Add any other context about the problem here. -->
0easy
Title: Support passing multiple input_file parameters to ``evaluate'' command Body: **Is your feature request related to a problem? Please describe.** I want to evaluate multiple datasets (same formatting, they can share the same dataset reader). The "evaluate" command takes much longer to load the model than to evaluate. **Describe the solution you'd like** support passing multiple input files and output files to the "evaluate" command **Describe alternatives you've considered** I saw this issue https://github.com/allenai/allennlp/issues/2663. I'm not sure if anyone is working on it.
0easy
Title: DOC: Add warnings for methods falling back to pandas Body:
0easy
Title: Dialogs created by `Dialogs` on Windows don't have focus Body: This makes using dialogs without a mouse inconvenient. I noticed this when working with #4634 but this is more related to #4619.
0easy
Title: Modify translation pipeline langdetect parameter to accept language detection function Body: *See [this comment](https://github.com/neuml/txtai/issues/423#issuecomment-1426777995) for details on what to implement for this issue* First and foremost, I've been getting errors while trying to install fasttext with different package managers (pdm and poetry) and python versions (3.8-3.10). Looking at the issues in their repo, it seems that this is a [common issue](https://github.com/facebookresearch/fastText/issues/1298) for [many people](https://github.com/python-poetry/poetry/issues/6113). Someone created [a simple pull request](https://github.com/facebookresearch/fastText/pull/1292) 6 months ago to fix this and it has been ignored - as have hundreds of other Issues and PRs. The project just seems to be abandoned. But [Lingua](https://github.com/pemistahl/lingua-py) appears to be a new and very promising language detection tool. Their benchmarks show increased accuracy, though they don't have any performance metrics for comparison. At the very least, could the fasttext dependency be switched to [this repo](https://github.com/cfculhane/fastText) (the 6 month old bugfix PR)?
0easy
Title: [Tech debt] Improve interface for DropOut Body: * `scale_min`, `scale_max` => `scale_range` => We can update transform to use new signature, keep old as working, but mark as deprecated. ---- PR could be similar to https://github.com/albumentations-team/albumentations/pull/1704
0easy
Title: Having trouble attempting to "import pandas_ta as ta" Body: Python 3.12.1 running using PyCharm as my IDE Package Manager shows "pandas_ta with version 0.3.14b0 when I run the code: ```python import pandas as pd import requests import pandas_ta as ta ``` ```sh Traceback (most recent call last): File "D:\Users\DavidL\Documents\Python\TSNTest\TSNTest\.venv\Scripts\TSNTest.py", line 5, in <module> import pandas_ta as ta File "C:\Users\RollnWX3\AppData\Local\Programs\Python\Python312\Lib\site-packages\pandas_ta\__init__.py", line 7, in <module> from pkg_resources import get_distribution, DistributionNotFound ModuleNotFoundError: No module named 'pkg_resources' Process finished with exit code 1 ``` I do not see the _TA Lib_ listed in my package manager, and pip does not understand a space in the name I must be doing something obvious and easy to fix, and I do not see it yet. UPDATE: noted that pkg_resources is not in my list of modules, and py -m pip install pkg_resources ERROR: Could not find a version that satisfies the requirement pkg_resources (from versions: none) ERROR: No matching distribution found for pkg_resources fails.... David Here is the pip list from my python console: PS D:\Users\DavidL\Documents\Python\TSNTest\TSNTest> py -m pip list Package Version ------------------ ------------ certifi 2023.11.17 charset-normalizer 3.3.2 contourpy 1.2.0 cycler 0.12.1 fonttools 4.47.2 idna 3.6 kiwisolver 1.4.5 matplotlib 3.8.2 numpy 1.26.3 packaging 23.2 pandas 2.2.0rc0 pandas_ta 0.3.14b0 pillow 10.2.0 pip 23.3.2 pyparsing 3.1.1 python-dateutil 2.8.2 pytz 2023.3.post1 requests 2.31.0 six 1.16.0 termcolor 2.4.0 tinytag 1.10.1 tzdata 2023.4 urllib3 2.1.0 _________________________________________ **Which version are you running? The lastest version is on Github. 
Pip is for major releases.** ```python import pandas_ta as ta print(ta.version) ``` **Do you have _TA Lib_ also installed in your environment?** ```sh $ pip list ``` **Have you tried the _development_ version? Did it resolve the issue?** ```sh $ pip install -U git+https://github.com/twopirllc/pandas-ta.git@development ``` **Describe the bug** A clear and concise description of what the bug is. **To Reproduce** Provide sample code. **Expected behavior** A clear and concise description of what you expected to happen. **Screenshots** If applicable, add screenshots to help explain your problem. **Additional context** Add any other context about the problem here. Thanks for using Pandas TA!
0easy
Title: [Bug] [Typo] Fix convert_all.sh Body: Fix these two lines https://github.com/dmlc/gluon-nlp/blob/e0b293e9ba515f8b4c58671232bd7c1b99522ebd/scripts/conversion_toolkits/convert_all.sh#L3-L4 `convert_bert_from_tf_hub.sh` --> `convert_bert.sh` `convert_albert_from_tf_hub.sh` --> `convert_albert.sh`
0easy
Title: Marketplace - Profile description text doesn't render new-lines Body: <img src="https://uploads.linear.app/a47946b5-12cd-4b3d-8822-df04c855879f/484659e5-29b1-4748-a9da-a46f73d3cc58/8e111859-ba25-4ba0-ac2f-c2fddec620ec?signature=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJwYXRoIjoiL2E0Nzk0NmI1LTEyY2QtNGIzZC04ODIyLWRmMDRjODU1ODc5Zi80ODQ2NTllNS0yOWIxLTQ3NDgtYTlkYS1hNDZmNzNkM2NjNTgvOGUxMTE4NTktYmEyNS00YmEwLWFjMmYtYzJmZGRlYzYyMGVjIiwiaWF0IjoxNzM0NjIxNjM5LCJleHAiOjMzMzA1MTgxNjM5fQ.iIEUDccEycfQz1JwR-aF-J1DK3wJTTIwS8gbZFRaIeE " alt="image.png" width="946" data-linear-height="330" /> <img src="https://uploads.linear.app/a47946b5-12cd-4b3d-8822-df04c855879f/cf2f5730-ea3e-4351-a559-764aff49bdd6/b22fca74-cff7-4b72-a53f-515ad917649e?signature=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJwYXRoIjoiL2E0Nzk0NmI1LTEyY2QtNGIzZC04ODIyLWRmMDRjODU1ODc5Zi9jZjJmNTczMC1lYTNlLTQzNTEtYTU1OS03NjRhZmY0OWJkZDYvYjIyZmNhNzQtY2ZmNy00YjcyLWE1M2YtNTE1YWQ5MTc2NDllIiwiaWF0IjoxNzM0NjIxNjM5LCJleHAiOjMzMzA1MTgxNjM5fQ.H26WbCg8LC3iWyHXheI3JLmuU-3aI6Pq8EONrumbJB8 " alt="image.png" width="1120" data-linear-height="353" />
0easy
Title: pandas-profiling crashes if there are some unwanted characters in text fields Body: I am trying to apply profiler for data extracted from SAP. Data size is 1 million rows and 42 columns. I load it to dataframe dfp and use the following code: ``` pand_prof_name = os.path.join(rep_folder, "pandas_profiler.html") pandas_profiling.ProfileReport(dfp).to_file(pand_prof_name) ``` After analysis starts, something goes wrong and the following error occurs: ``` HBox(children=(HTML(value='variables'), FloatProgress(value=0.0, max=1.0), HTML(value=''))) ΠŸΡ€ΠΎΠΈΠ·ΠΎΡˆΠ»Π° ошибка.Traceback (most recent call last): File "C:\Users\groshev3-vn\Documents\20210215 backend_test\pmprofiler\profiler.py", line 365, in create_report start_pandas_profiling_report(df, rep_folder, rep_file_name, pp_max_sample) File "C:\Users\groshev3-vn\Documents\20210215 backend_test\pmprofiler\profiler.py", line 296, in start_pandas_profiling_report pandas_profiling.ProfileReport(dfp).to_file(pand_prof_name) File "c:\anaconda3\lib\site-packages\pandas_profiling\__init__.py", line 70, in __init__ description_set = describe_df(df) File "c:\anaconda3\lib\site-packages\pandas_profiling\model\describe.py", line 602, in describe executor.imap_unordered(multiprocess_1d, args) File "c:\anaconda3\lib\multiprocessing\pool.py", line 735, in next raise value File "c:\anaconda3\lib\multiprocessing\pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "c:\anaconda3\lib\site-packages\pandas_profiling\model\describe.py", line 388, in multiprocess_1d return column, describe_1d(series) File "c:\anaconda3\lib\site-packages\pandas_profiling\model\describe.py", line 368, in describe_1d type_to_func[series_description["type"]](series, series_description) File "c:\anaconda3\lib\site-packages\pandas_profiling\model\describe.py", line 167, in describe_categorical_1d stats.update(text_summary(series)) File "c:\anaconda3\lib\site-packages\visions\application\summaries\series\text_summary.py", line 30, in 
text_summary summary["script_values"] = {key: script(key) for key in character_counts.keys()} File "c:\anaconda3\lib\site-packages\visions\application\summaries\series\text_summary.py", line 30, in <dictcomp> summary["script_values"] = {key: script(key) for key in character_counts.keys()} File "c:\anaconda3\lib\site-packages\tangled_up_in_unicode\tangled_up_in_unicode.py", line 413136, in script key_end = end_keys[insertion_point] IndexError: list index out of range ``` Out of all the data, I was able to find the cell that produces this error (see attached csv file). I use **Python 3.6**, **pip**, and both **win10** and **RHLinux**. The same error occurs whether I run it in a command shell or a Jupyter Notebook. As I understood, the last character alone is the reason. Original data are in UTF-8.
0easy
Title: Bug: New `create_static_files_router` automatically adds routes to schema docs Body: ### Description I found it surprising when using the new static files feature that 1: It auto-adds these to the schema 2: It adds them under a sort of "ugly" router header `default` We should fix one or both of those ### URL to code causing the issue _No response_ ### MCVE ```python from litestar import Litestar from litestar.static_files import create_static_files_router app = Litestar( route_handlers=[ create_static_files_router(directories=["assets"], path="/static"), ], ) ``` ### Steps to reproduce ```bash 1. run mcve 2. visit /schema 3. See schema for default ``` ### Screenshots <img width="508" alt="image" src="https://github.com/litestar-org/litestar/assets/45884264/a6145e1a-370f-4105-bfec-cf73bddc90cd"> ### Logs _No response_ ### Litestar Version 2.7/2.8 ### Platform - [X] Linux - [X] Mac - [X] Windows - [ ] Other (Please specify in the description above)
0easy
Title: [DOC] Wrong default value for `return_metadata` in `load_forecasting` Body: ### Describe the issue linked to the documentation The parameter `return_metadata` in `aeon.datasets.load_forecasting` is set to False, but in the docstring it is set to True ### Suggest a potential alternative/fix it should be False in the docstring
0easy
Title: ZLMA newbie question Body: Hello, I am new to Pandas and Pandas TA; here is my code: ```python data=[1,2,3,4,5,6,7,8,9,10,1,2,3,4,5,6,7,8,9,10] #etc.. df = DataFrame (data,columns=["Close"]) zlma_list= ta.zlma(pd.Series(df['Close'])) ``` How can I change the default parameter values of ZLMA? ```python def zlma(close, length=None, mamode=None, offset=0, **kwargs): ``` I tried this: ```python zlma_list= ta.zlma(close=pd.Series(df['Close'],length=21,mamode=sma)) # changing the length is crucial for me. I made changes to the original code, but it should be easier. ``` but it doesn't work. Also, what does kwargs stand for? What is the aim of kwargs? Thank you for understanding. I don't know if this is a good place to ask.
0easy
Title: dynDNS: Support multi-IP update in myip= query parameter Body: Title says it all. Example log: ``` Oct 13 21:49:38 digga desec/www[25547]: update.dedyn.io:443 79.231.19.56 - cmvdo.dedyn.io [13/Oct/2021:19:49:38 +0000] "GET /nic/update?hostname=[...]&myip=2003:[...]:204d,79.[...].56 HTTP/1.0" 400 179 "-" "Deutsche Telekom - Speedport Smart 4 Typ B - 010146.1.1.000.0" TLSv1.3/TLS_AES_256_GCM_SHA384 ```
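Splitting and validating the comma-separated value is straightforward — an illustrative sketch only (the real change belongs in deSEC's dynDNS update view; the helper name is hypothetical):

```python
from ipaddress import ip_address

def parse_myip(param):
    """Split a comma-separated myip= value and validate each address."""
    ips = []
    for part in param.split(","):
        part = part.strip()
        if part:
            ips.append(ip_address(part))  # raises ValueError on invalid input
    return ips

addrs = parse_myip("2003:db8::204d, 79.231.19.56")
```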
0easy
Title: Remove redundant message in Page Inverse Body: We display the current state in the user interface even though the widgets are already displaying it. Is that really necessary?
0easy
Title: Saving multiple Pages to individual HTML files Body: Can we save reports with multiple pages to individual HTML files? ### Discussed in https://github.com/datapane/datapane/discussions/174 <div type='discussions-op-text'> <sup>Originally posted by **ZKpot** December 7, 2021</sup> Hello, In general, I find it very convenient that the report is saved as a single-file HTML document. However, sometimes the report size might become too big (in one of my use cases I have a report with 60 pages, each page is approx. 30 MB) and the browser cannot handle it. My current solution is to generate a single-file HTML document for each page using datapane and load them from an auto-generated index page using an iframe. I couldn't find a way to embed static HTML in datapane (there are some js security measures that prevent it), so I had to write the index page myself (which works well, but doesn't look that nice). It would be very nice if this feature (generating individual HTML files for each page) was available natively in datapane and could be enabled by setting a corresponding flag when generating a report. Thank you in advance! -Nikita </div>
0easy
Title: Improvements to the Manifold Visualizer Body: The following improvements can be made to the manifold visualizer: *Note to contributors: items in the below checklist don't need to be completed in a single PR; if you see one that catches your eye, feel to pick it off the list!* - [ ] describe the number of features the embedding represents (e.g. the number of columns in X) and optionally the number of points in the dataset - [ ] create a documentation example for clustering - [x] add `frameon=True` to legend - [ ] create documentation example using the standard scalar - [ ] get the visual tests to work on windows/decrease tolerance - [ ] decrease tolerance of `test_manifold_pandas` (not sure why the tolerance is so high) - [x] investigate the `n_neighbors` attribute; see below. See #398 #399
0easy
Title: Export button is shown on detail page even if no permissions are present Body: **Describe the bug** The export button is still shown, even if the user has no permissions on the object. Clicking the button gives a 403. ![image](https://github.com/user-attachments/assets/c4c6ed1b-37b7-4410-bf56-fd95c290f128) **To Reproduce** Using the example app: 1. Create a new user and give them full permissions for 'Category'. 2. You should be able to export etc 3. Remove those permissions from the user and navigate to a category page, eg. http://localhost:8000/admin/core/category/2/change/ 4. The screen as per the image above is shown. The export button should not be present. **Versions (please complete the following information):** - Django Import Export: 4.1.1 - Python 3.8 - Django 5.1 **Expected behavior** The export button should not be present. See also #1942
0easy
Title: Different KDE implementations Body: We have experienced some issues in the past with statsmodels' KDE implementation (see in-line comments in `ridgeplot._kde.estimate_density_trace()`). - Most obvious option: [scipy.stats.gaussian_kde](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gaussian_kde.html#scipy.stats.gaussian_kde) - Some dedicated libraries: - https://github.com/LBL-EESA/fastkde - https://github.com/tommyod/KDEpy - Python 3.13 will ship a `kde()` utility as part of the built-in `statistics` module: - https://docs.python.org/3.13/library/statistics.html#statistics.kde - https://docs.python.org/3.13/library/statistics.html#sampling-from-kernel-density-estimation - https://github.com/python/cpython/pull/115863 - https://github.com/python/cpython/issues/115532 Things to keep in mind: - Backwards compatibility with the existing `ridgeplot()` arguments that are passed to statsmodels' `KDEUnivariate` - Performance, e.g., statsmodels provides a faster FFT implementation when using the gaussian kernel. - ...more?
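For reference, the core computation every option above implements — a Gaussian KDE — can be sketched in pure Python with a fixed bandwidth (the libraries add bandwidth selection, FFT speed-ups, and other kernels on top):

```python
import math

def gaussian_kde(samples, bandwidth):
    # Density estimate: average of Gaussian bumps centred on each sample
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))

    def density(x):
        return norm * sum(
            math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples
        )

    return density

f = gaussian_kde([0.0, 1.0, 2.0], bandwidth=0.5)
```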
0easy
Title: Set METAREFRESH_IGNORE_TAGS to ["noscript"] by default Body: I was wrong in https://github.com/scrapy/scrapy/issues/3844. The default value should be `["noscript"]`, to deal with [antibot behaviors](https://github.com/scrapy/scrapy/commit/ec1ef0235f9deee0c263c9b31652d3e74a754acc). Found by @mukthy.
0easy
Title: Access Nameservers during Development Body: docker-compose exposes ports to the name servers via `docker-compose.dev.yml`, but this change stops connections: https://github.com/desec-io/desec-stack/commit/09ede83056f9b0e10c67035ddace26bd060a152b#diff-861d4dd8bb85a30abf4a91efa8d9bd5fR14
0easy
Title: Optimizer.tell doesn't check length of point Body: First, let me say how awesome this library is and thanks for all the hard work that has been put into it. I was playing around recently and made a typo that really tripped me up. The setup here is that there is some amount of validation that happens to points passed in to `Optimizer.tell`: ```python from skopt import Optimizer opt = Optimizer([('a', 'b', 'c'), (1, 10)]) opt.tell(['d', 2], 1.0) # expect ValueError since 'd' not in space Traceback (most recent call last): [output trimmed...] ValueError: Point (['d', 2]) is not within the bounds of the space ([('a', 'b', 'c'), (1, 10)]). ``` That behaves just like I would expect. But if I accidentally omit one of the dimensions of the point, then this happens: ```python >>> opt.tell(['a', ], 1.0) # I expect an error here too, but it works fine... fun: 1.0 func_vals: array([ 1., 1.]) models: [] random_state: <mtrand.RandomState object at 0x0101010102> space: Space([Categorical(categories=('a', 'b', 'c'), prior=None), Integer(low=1, high=10)]) specs: None x: ['a', 2] x_iters: [['a', 2], ['a']] >>> # I can still add more points >>> for i in range(7): ... opt.tell(opt.ask(), 1.0) ... [output trimmed...] >>> # no errors until I make it to `opt._n_initial_points` and a model is built >>> opt.tell(opt.ask(), 1.0) Traceback (most recent call last): [output trimmed...] IndexError: list index out of range ``` It looks like this happens because of using `zip` in `Space.__contains__`, which is used by `utils.check_x_in_space` which is called at the start of `Optimizer.tell`. Since `zip` will work even if the inputs don't have the same length, maybe it should be preceded by checking that `len(point) == len(self.dimensions)`? Thanks!
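A minimal sketch of the proposed guard (names are hypothetical — in skopt it would sit in `utils.check_x_in_space` before the `zip`-based membership test):

```python
def check_point_length(point, dimensions):
    # Reject points whose length differs from the search space's, instead of
    # letting zip() silently truncate the comparison
    if len(point) != len(dimensions):
        raise ValueError(
            f"Point {point!r} has {len(point)} entries but the space "
            f"has {len(dimensions)} dimensions"
        )

check_point_length(["a", 2], [("a", "b", "c"), (1, 10)])  # passes silently
```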
0easy
Title: Allow last mode to be fixed in decompositions Body: #### Problem In response to #72 and for a few months now, Tensorly has allowed to fix some modes in decompositions, e.g. using the following syntax for parafac: ```python out = parafac(tensor, rank, fixed_modes=[0,1]) ``` In the above example, the first two modes are fixed, meaning they will not be updated during the iterations of the optimization algorithm. However, for technical reasons, the last mode may not be fixed, i.e. if the data is a three-way tensor, ```python out = parafac(tensor, rank, fixed_modes=[0,2]) ``` will raise a warning and mode `2` will not be fixed. #### Solution Note the following two facts: - the last mode should never be fixed inside the optimization algorithm since the last mode update rule allows for fast error estimation. - if all modes are fixed then the decomposition should just return the initialization Therefore, a quick patch would be: - check if all modes are fixed. If true, then don't run the optimization algorithm. - Otherwise, if the last mode is fixed, then permute a non-fixed mode with the last one at the beginning of the algorithm, and permute it back in place at the end of the algorithm. #### Who can PR for this issue Anyone :) This is not a difficult fix, so feel free to tackle this issue to get started in Tensorly!
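The swap planning in the quick patch can be sketched independently of the solver (the helper name and return conventions are hypothetical):

```python
def plan_last_mode_swap(n_modes, fixed_modes):
    """Decide how to handle fixed modes before running the optimization loop.

    Returns "skip" if every mode is fixed (just return the initialization),
    None if the last mode is already free, or an (i, last) pair of modes to
    permute at the start of the algorithm and permute back at the end.
    """
    if len(fixed_modes) == n_modes:
        return "skip"
    last = n_modes - 1
    if last not in fixed_modes:
        return None
    free = next(m for m in range(n_modes) if m not in fixed_modes)
    return (free, last)

print(plan_last_mode_swap(3, [0, 2]))  # (1, 2): swap mode 1 with mode 2
```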
0easy
Title: Correct all mentions of Py.Cafe Body: We have generally referred to PyCafe as Py.Cafe throughout. I've confirmed with the PyCafe team and they'd rather use: * PyCafe to describe their organization * [https://py.cafe](https://py.cafe) to talk about the site itself The best way to fix this would probably be to grep through the docs content (`vizro-core` only as we don't use it in `vizro-ai` examples) for Py.Cafe. It would make sense also to add a line to the `Terms.yml` file in the [Vale styles](https://github.com/mckinsey/vizro/blob/main/.vale/styles/Microsoft/Terms.yml): ```yml Py.Cafe: PyCafe ```
0easy
Title: Refactoring: completer Body: - [ ] To prevent issues like https://github.com/xonsh/xonsh/issues/5809, in general we need to treat completers that return a list of empty lines as if they returned None. The starting point is [here](https://github.com/xonsh/xonsh/blob/bb75a5a53ba6b5e4ed60ae88839283075e9f47dc/xonsh/completer.py#L312). - [ ] We have a bad pattern: [bash_completion in xonsh](https://github.com/xonsh/xonsh/blob/main/xonsh/completers/bash_completion.py) is a copy of [bash_completion in py-bash-completion](https://github.com/xonsh/py-bash-completion/blob/main/bash_completion.py), and the two copies have drifted apart. We need to remove the copy-pasting. My suggestion: create a xontrib in py-bash-completion, create a xontrib-bash-completion package and add it as a dependency to xonsh, then remove xonsh/completers/bash_completion.py. - [ ] #3914 ## For community ⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
0easy
Title: Pandas 2.1.0 and mfi indicator Body: **Which version are you running? The lastest version is on Github. Pip is for major releases.** pandas-ta 0.3.14b python 3.12 pandas 2.1.0 **Do you have _TA Lib_ also installed in your environment?** No **Have you tried the _development_ version? Did it resolve the issue?** no, because is not compatible with python 3.12 **Describe the bug** [A clear and concise description of what the bug is.](futurewarning: setting an item of incompatible dtype is deprecated and will raise in a future error of pandas) **To Reproduce** ```python df["MFI"] = ta.mfi(high=df["High"], low=df["Low"], close=df["Close"], volume=df["Volume"], length=14, talib=False) ``` **Expected behavior** pandas 2.0.3 doesn't give that warning **Screenshots** [Reason is updated pandas behaviour](https://pandas.pydata.org/pdeps/0006-ban-upcasting.html) **Additional context** I find a function from: https://gist.github.com/quantra-go-algo/d3ced509e55ceb15eb0a6bd283e6853f#file-ti_mfi-py and fine tuned with chatgpt to execute faster than original code and pandas-ta: I couldn't correct the pandas-ta bug, so I embedded below code to my python program ```python def mfi(high, low, close, volume, n): typical_price = (high + low + close) / 3 money_flow = typical_price * volume mf_sign = np.where(typical_price > typical_price.shift(1), 1, -1) signed_mf = money_flow * mf_sign # Calculate gain and loss using vectorized operations positive_mf = np.where(signed_mf > 0, signed_mf, 0) negative_mf = np.where(signed_mf < 0, -signed_mf, 0) mf_avg_gain = pd.Series(positive_mf).rolling(n, min_periods=1).sum() mf_avg_loss = pd.Series(negative_mf).rolling(n, min_periods=1).sum() return (100 - 100 / (1 + mf_avg_gain / mf_avg_loss)).to_numpy() ``` Thanks for using Pandas TA!
0easy
Title: Type stubs Body: **Which version are you running? The latest version is on GitHub. Pip is for major releases.** 0.3.14b **Is your feature request related to a problem? Please describe.** Python is increasingly moving towards typing. Even major projects like Pandas, NumPy, and Matplotlib have added type hinting and made type stubs available to developers. For my job, or when I create a serious project, I ensure my code is type safe. While my type checker can infer some return types from the library, the input types are always unknown. **Describe the solution you'd like** Progressively add type hints to the code base and generate type stubs to be made available to developers using pandas_ta.
0easy
Title: DEMO PAGE IS DOWN Body: The demo page is down. We need a new host and should update the Travis CI config accordingly.
0easy
Title: Improve description in Step 2 for the example Body: The description in Step 2 does not seem very clear, e.g., in " internal node is a parent of that internal node", it is not clear which nodes are referred to here. It would be good to walk through how one column of C is constructed. Should the second column be (1,-1,0,1)? It would be nice to have some more intuition about C and D. There is also a typo: "tenor".
0easy
Title: UserWarning thrown when importing hypertools in jupyter notebook Body: When importing hypertools in a jupyter notebook (`import hypertools as hyp`), the following warning is thrown: ``` /opt/conda/lib/python3.6/site-packages/matplotlib/__init__.py:1405: UserWarning: This call to matplotlib.use() has no effect because the backend has already been chosen; matplotlib.use() must be called *before* pylab, matplotlib.pyplot, or matplotlib.backends is imported for the first time. warnings.warn(_use_error_msg) ``` Proposed fix: suppress this warning within hypertools (more info [here](https://docs.python.org/2/library/warnings.html))
0easy
Title: Enhancement: support constraints for a `dict` type Body: ### Summary A valid model: ```py from typing import Annotated from annotated_types import MinLen from pydantic import BaseModel from polyfactory.factories.pydantic_factory import ModelFactory class Person(BaseModel): pets_names: Annotated[dict[str, str], MinLen(1)] class PersonFactory(ModelFactory[Person]): __model__ = Person PersonFactory.build() ``` Error: ``` polyfactory.exceptions.ParameterException: received constraints for unsupported type dict[str, str] ```
0easy
Title: BUG Error while deleting a task Body: I am trying to delete a task, but Rocketry returns a KeyError. ```python import asyncio from rocketry import Rocketry from rocketry.conds import secondly app = Rocketry() @app.task(secondly, name='foo') async def foo(): ... async def main(): asyncio.create_task(app.serve()) app.session.remove_task('foo') # app.session.tasks.remove('foo') if __name__ == '__main__': asyncio.run(main()) ```
0easy
Title: Documentation README Body: ### Proposal Develop a documentation contribution guideline modeled after [pandas](https://github.com/pandas-dev/pandas/blob/master/doc/README.rst) Documentation is often seen as a safe place for new contributors to start, but Sphinx and reStructuredText can be intimidating at first. The goal of the readme would be to create an inviting document that gives a high-level overview of how to build the docs, where to find rst documentation, how and when to use the `plot::` directive, and gives some direction towards concrete issues to tackle (i.e. the novice and doc tags in the issue list). Relates to issues #353, #354 and PR #446
0easy
Title: [tech debt] Add automatic check to validate that links in Readme are valid Body: I think there should be some pre-commit hook tool to check that.
0easy
Title: Improve error message when object is passed to Trainer callbacks Body: ### Bug description I use torch==1.13.1 and lightning==2.2.0, and multiple callbacks in Trainer, it will raise Error "ValueError: Expected a parent". I have read the docs and github issues about this problem[https://github.com/Lightning-AI/pytorch-lightning/issues/17485](url), but the different thing is that I when I remove the TensorBoardLogger from callbacks list in Trainer, the code is working correctly. ### What version are you seeing the problem on? v2.2 ### How to reproduce the bug ```python '''python import lightning.pytorch as pl if __name__ == "__main__": id2label = {} label2id = {} for i, label in enumerate(cf.new_labels): id2label[i] = label label2id[label] = i bert_config=AutoConfig.from_pretrained(cf.base_model_path,num_labels=cf.num_labels,id2label=id2label,label2id=label2id) early_stop_callback = pl.callbacks.EarlyStopping( monitor='val_f1', min_delta=0.00, patience=20, verbose=True, mode='max' ) model_checkpoint_callback = pl.callbacks.ModelCheckpoint( monitor='val_f1', dirpath=cf.save_best_model_path, filename='best-{epoch:02d}-{val_f1:.2f}', save_top_k=1, mode='max', ) tb_logger=pl.loggers.TensorBoardLogger(cf.log_path, name=cf.log_path.strip('.json')) pl.seed_everything(42) data_module=ContrastiveDataModule() model=ContrastiveModel(bert_config) trainer=pl.Trainer(max_epochs=100,devices=[2],callbacks=[early_stop_callback,model_checkpoint_callback,tb_logger],gradient_clip_val=1) trainer.fit(model,data_module) ''' ``` ### Error messages and logs ``` Traceback (most recent call last): File "/xx/script/src/model_lightning.py", line 411, in <module> trainer=pl.Trainer(max_epochs=100,devices=[2],callbacks=[early_stop_callback,model_checkpoint_callback,tb_logger],gradient_clip_val=1) File "/home/huangfu/miniconda3/envs/autolabel/lib/python3.9/site-packages/lightning/pytorch/utilities/argparse.py", line 70, in insert_env_defaults return fn(self, **kwargs) File 
"/home/huangfu/miniconda3/envs/autolabel/lib/python3.9/site-packages/lightning/pytorch/trainer/trainer.py", line 430, in __init__ self._callback_connector.on_trainer_init( File "/yy/miniconda3/envs/autolabel/lib/python3.9/site-packages/lightning/pytorch/trainer/connectors/callback_connector.py", line 79, in on_trainer_init _validate_callbacks_list(self.trainer.callbacks) File "/yy/miniconda3/envs/autolabel/lib/python3.9/site-packages/lightning/pytorch/trainer/connectors/callback_connector.py", line 227, in _validate_callbacks_list stateful_callbacks = [cb for cb in callbacks if is_overridden("state_dict", instance=cb)] File "/yy/miniconda3/envs/autolabel/lib/python3.9/site-packages/lightning/pytorch/trainer/connectors/callback_connector.py", line 227, in <listcomp> stateful_callbacks = [cb for cb in callbacks if is_overridden("state_dict", instance=cb)] File "/yy/miniconda3/envs/autolabel/lib/python3.9/site-packages/lightning/pytorch/utilities/model_helpers.py", line 42, in is_overridden raise ValueError("Expected a parent") ValueError: Expected a parent ``` ### Environment <details> <summary>Current environment</summary> ``` #- PyTorch Lightning Version :2.2.0 #- PyTorch Version: 1.13.1 #- Python version:3.9.19 #- OS: Linux #- CUDA/cuDNN version: 11.7 #- GPU models and configuration: A custom model based on BERT #- How you installed Lightning(`conda`, `pip`, source): conda ``` </details> ### More info _No response_ cc @borda
0easy
Title: `gc --not-in-remote`: misleading warning message Body: # Bug Report ## Description The warning message printed when using the `--not-in-remote` flag appears to be incorrect. The behavior of the flag appears correct (consistent with the documentation); the warning message is not consistent with behavior or documentation. ### Reproduce `dvc gc --workspace --not-in-remote` Prints the following warning: ``` WARNING: This will remove all cache except items used in the workspace that are present in the DVC remote of the current repo. Are you sure you want to proceed? [y/n]: ``` The actual behavior for this command is to keep only the local cache items that are: a) referenced in the current workspace, and b) not present in the remote. ### Expected The warning message should read, perhaps, ``` WARNING: This will remove all cache except items used in the workspace that are not present in the DVC remote of the current repo. Are you sure you want to proceed? [y/n]: ``` (That is, just adding the word "not".) ### Environment information **Output of `dvc doctor`:** ```console $ dvc doctor ``` ``` DVC version: 3.55.2 (pip) ------------------------- Platform: Python 3.12.7 on macOS-15.0.1-x86_64-i386-64bit Subprojects: dvc_data = 3.16.6 dvc_objects = 5.1.0 dvc_render = 1.0.2 dvc_task = 0.40.2 scmrepo = 3.3.8 Supports: http (aiohttp = 3.10.10, aiohttp-retry = 2.8.3), https (aiohttp = 3.10.10, aiohttp-retry = 2.8.3), s3 (s3fs = 2024.9.0, boto3 = 1.35.36) Config: Global: /Users/dan/Library/Application Support/dvc System: /Library/Application Support/dvc Cache types: reflink, hardlink, symlink Cache directory: apfs on /dev/disk1s1s1 Caches: local Remotes: s3 Workspace directory: apfs on /dev/disk1s1s1 Repo: dvc, git Repo.site_cache_dir: /Library/Caches/dvc/repo/a60ba71a9b85f3c8d2c283884465924b ```
0easy
Title: Can't query by id? Body: Thanks for a great library. After starting examples/flask_mongoengine : I expected I would be able to query by id? I'm unable to query by id, but can query by name ``` { allEmployees(id:"RW1wbG95ZWU6NWE5ZjE4NzA2MGJmZmVhZDcyYmI3Zjgw") { pageInfo { hasNextPage } edges { node { id, name department { id, name } } } } } { "errors": [ { "message": "u'RW1wbG95ZWU6NWE5ZjE4NzA2MGJmZmVhZDcyYmI3Zjgw' is not a valid ObjectId, it must be a 12-byte input or a 24-character hex string", "locations": [ { "column": 3, "line": 13 } ] } ], "data": { "allEmployees": null } } ``` ``` { allEmployees( name: "Peter") { pageInfo { hasNextPage } edges { node { id, name department { id, name } } } } } { "data": { "allEmployees": { "pageInfo": { "hasNextPage": false }, "edges": [ { "node": { "id": "RW1wbG95ZWU6NWE5ZjE4NzA2MGJmZmVhZDcyYmI3Zjgw", "name": "Peter", "department": { "id": "RGVwYXJ0bWVudDo1YTlmMTg3MDYwYmZmZWFkNzJiYjdmN2M=", "name": "Engineering" } } } ] } } } ```
0easy
Title: Bug: constraints not working in pydantic v2 (when a `Union` is used) Body: ## Preface There was a confirmed bug https://github.com/litestar-org/polyfactory/issues/307 but it was not completely fixed. Using `Annotated` was fixed but using `Annotated[<type>, <...>] | None` still fails. ## Reproducible sample ```py from typing import Annotated from annotated_types import Ge, Le from pydantic import BaseModel LongLat = tuple[Annotated[float, Ge(-180), Le(180)], Annotated[float, Ge(-90), Le(90)]] class Location(BaseModel): long_lat: LongLat | None from polyfactory.factories.pydantic_factory import ModelFactory class LocationFactory(ModelFactory[Location]): __model__ = Location # Run multiple times because we have a union of types and we must trigger a not-None option. for _ in range(100): LocationFactory.build() ``` ## Error output ``` long_lat.0 Input should be greater than or equal to -180 [type=greater_than_equal, input_value=-7760968059339.52, input_type=float] For further information visit https://errors.pydantic.dev/2.1/v/greater_than_equal long_lat.1 Input should be less than or equal to 90 [type=less_than_equal, input_value=7865.80622030079, input_type=float] For further information visit https://errors.pydantic.dev/2.1/v/less_than_equal ``` ## Versions ``` Name: polyfactory Version: 2.7.0 ```
0easy
Title: Add Python API documentation / Module reference to the documentation/website Body: **Is your feature request related to a problem? Please describe.** As explained in #996, dynaconf 2.2.3 has a [Module reference](https://dynaconf.readthedocs.io/en/docs_223/reference/dynaconf.html#) where I can see all the methods of dynaconf.LazySettings together with a description. I don't see anything similar in https://dynaconf.com/, and @rochacbruno confirmed that: > Hi, since we migrated to mkdocs there is **no API reference documented**, there is a mkdocs plugin that can generate that we just need to setup it. **Describe the solution you'd like** @rochacbruno and @pedro-psb mentioned that there is a mkdocs plugin called [mkdocstring](https://mkdocstrings.github.io/) that can be used to generate that kind of API documentation. So I guess this mkdocs plugin should be configured/set up/included in the pipeline that generates the website.
0easy
Title: Bug in popmon_dp_loader_example.ipynb (popmon version incompatibility) Body: **General Information:** Popmons current version is not compatible with the notebook we have written for integration **Describe the bug:** ``` --------------------------------------------------------------------------- ValidationError Traceback (most recent call last) Cell In[6], line 3 1 pm_data = popmon_dataloader(path, time_index) ----> 3 report_pm_loader = pm_data.pm_stability_report( 4 time_axis=time_index, 5 time_width="1w", 6 time_offset="2015-07-02", 7 extended_report=False, 8 pull_rules={"*_pull": [10, 7, -7, -10]}, 9 ) 11 # Save popmon reports 12 report_pm_loader.to_file(os.path.join(report_output_dir, "popmon_loader_report.html")) File ~/DataProfiler/venv/lib/python3.8/site-packages/popmon/pipeline/report.py:115, in df_stability_report(df, settings, time_width, time_offset, var_dtype, reference, split, **kwargs) 91 """Create a data stability monitoring html report for given pandas or spark dataframe. 92 93 :param df: input pandas/spark dataframe to be profiled and monitored over time. (...) 111 :return: dict with results of reporting pipeline 112 """ 114 if settings is None: --> 115 settings = Settings(**kwargs) 117 if len(settings.time_axis) == 0: 118 settings._set_time_axis_dataframe(df) File ~/DataProfiler/venv/lib/python3.8/site-packages/pydantic_settings/main.py:71, in BaseSettings.__init__(__pydantic_self__, _case_sensitive, _env_prefix, _env_file, _env_file_encoding, _env_nested_delimiter, _secrets_dir, **values) 60 def __init__( 61 __pydantic_self__, 62 _case_sensitive: bool | None = None, (...) 
69 ) -> None: 70 # Uses something other than `self` the first arg to allow "self" as a settable attribute ---> 71 super().__init__( 72 **__pydantic_self__._settings_build_values( 73 values, 74 _case_sensitive=_case_sensitive, 75 _env_prefix=_env_prefix, 76 _env_file=_env_file, 77 _env_file_encoding=_env_file_encoding, 78 _env_nested_delimiter=_env_nested_delimiter, 79 _secrets_dir=_secrets_dir, 80 ) 81 ) File ~/DataProfiler/venv/lib/python3.8/site-packages/pydantic/main.py:159, in BaseModel.__init__(__pydantic_self__, **data) 157 # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks 158 __tracebackhide__ = True --> 159 __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__) ValidationError: 2 validation errors for Settings extended_report Extra inputs are not permitted [type=extra_forbidden, input_value=False, input_type=bool] For further information visit https://errors.pydantic.dev/2.1/v/extra_forbidden pull_rules Extra inputs are not permitted [type=extra_forbidden, input_value={'*_pull': [10, 7, -7, -10]}, input_type=dict] For further information visit https://errors.pydantic.dev/2.1/v/extra_forbidden ``` **To Reproduce:** 1. pip install popmon 2. run all cells in the popmon_dp_loader_example.ipynb **Expected behavior:** Notebook will error on 6th cell with error described above
0easy
Title: Better documentation about slack_sdk.models usage in Web APIs Body: We can improve the docstrings and the documentation website to provide more information about models. * Add more info about models support in chat.* API docstrings * Add a section about models in the website: https://slack.dev/python-slack-sdk/ --- Interesting thanks! I definitely think this area of the package is pretty useful. IMO: it is a lot easier to read and understand from a code perspective than a large nested dictionary, and if someone is not using the block kit builder it is definitely a lot easier to write the code using the models. > No, you can directly pass an array of model objects as blocks keyword argument. Thanks, I would have had no idea about this, the doc string of `chat_postMessage` does not suggest this is possible. > For the above reason, we haven't been promoting the feature. `slack_sdk.models` is mentioned in the first few paragraphs of the readme ("easy-to-use builders"), maybe worth adding a note as this might mislead other developers as it has me. _Originally posted by @invokermain in https://github.com/slackapi/python-slack-sdk/issues/986#issuecomment-811106960_
0easy
Title: Clean up Python warnings Body: Running test suite, looks like Python starting to complain about some things. Change over as necessary to avoid future deprecation problems ``` tests/test_package.py::test_map <frozen importlib._bootstrap>:488: DeprecationWarning: Type google.protobuf.pyext._message.ScalarMapContainer uses PyType_Spec with a metaclass that has custom tp_new. This is deprecated and will no longer be allowed in Python 3.14. tests/test_package.py::test_map <frozen importlib._bootstrap>:488: DeprecationWarning: Type google.protobuf.pyext._message.MessageMapContainer uses PyType_Spec with a metaclass that has custom tp_new. This is deprecated and will no longer be allowed in Python 3.14. tests/test_package.py::test_map /Users/randyzwitch/miniconda3/envs/streamlit-folium/lib/python3.12/site-packages/google/protobuf/internal/well_known_types.py:91: DeprecationWarning: datetime.datetime.utcfromtimestamp() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.fromtimestamp(timestamp, datetime.UTC). _EPOCH_DATETIME_NAIVE = datetime.datetime.utcfromtimestamp(0) ``` ``` ........{'x': 24, 'y': 1744.359375, 'width': 452, 'height': 700} {'x': 80, 'y': 2279.375, 'width': 840, 'height': 1511} ..../Users/randyzwitch/github/streamlit-folium/examples/pages/geojson_popup.py:35: FutureWarning: Passing literal json to 'read_json' is deprecated and will be removed in a future version. To read from a literal string, wrap it in a 'StringIO' object. 
abbrs = pd.read_json(response.text) /Users/randyzwitch/miniconda3/envs/streamlit-folium/lib/python3.12/site-packages/streamlit/util.py:227: RuntimeWarning: coroutine 'expire_cache' was never awaited pass RuntimeWarning: Enable tracemalloc to get the object allocation traceback /Users/randyzwitch/miniconda3/envs/streamlit-folium/lib/python3.12/site-packages/streamlit/util.py:227: RuntimeWarning: coroutine 'expire_cache' was never awaited pass RuntimeWarning: Enable tracemalloc to get the object allocation traceback .F tests/test_package.py ....... tests/test_release.py . ```
0easy
Title: nslord: stop publishing DS records with SHA-1 Body: In agreement with [RFC 4509 Sec. 3](https://tools.ietf.org/html/rfc4509#section-3): > Implementations MUST support the use of the SHA-256 algorithm in DS RRs. Validator implementations SHOULD ignore DS RRs containing SHA-1 digests if DS RRs with SHA-256 digests are present in the DS RRset. This requires a change [here](https://github.com/desec-io/desec-stack/blob/master/nslord/conf/pdns.conf.var#L7), and ideally (but not necessarily) delegation updates of the domains we are authoritative for.
0easy
Title: Conda package Body: I tried to install it using conda, but the package cannot be found. Please make it available there. Thank you for creating this awesome package!
0easy
Title: CI: code style checks Body: Beyond just enforcing `black` code style, we also want to run some extra CI checks to flag bad practices like: - `import *` - imports that are unused in the code Some of these may be best addressed via tools like: https://launchpad.net/pyflakes https://pylint.pycqa.org/en/latest/
0easy
Title: Enhancement: ModelView.find_all and ModelView.count still don't take into account the Request for the stmt - needed for multitenant support Body: **Describe the bug** As already suggested by me in https://github.com/jowilf/starlette-admin/issues/274 , views.py.ModelView.find_all and views.py.ModelView.count still do not take into account the Request when building the stmt. This forces people to subclass this class, and copy/paste code every time a starlette-admin version upgrade comes out. **To Reproduce** Try to filter records based on the logged user, properly supporting multi-tenancy. The problem is in the line stmt = self.get_list_query().offset(skip), in both methods. **Environment (please complete the following information):** * Starlette-Admin version: 0.13.1 * ORM/ODMs: SQLAlchemy, psycopg2-binary==2.9.9 * SQLAlchemy-serializer==1.4.1 **Suggested fix** 1) Add these two methods to views.py.ModelView: ``` def get_list_query_for_request(self, request: Request): return self.get_list_query() # backwards compatibility def get_count_query_for_request(self, request: Request): return self.get_count_query() # backwards compatibility ``` 2) In views.py.ModelView.find_all, replace this line: ``` stmt = self.get_list_query().offset(skip) ``` with this line: ``` stmt = self.get_list_query_for_request(request).offset(skip) ``` 3) In views.py.ModelView.count, replace this line: ``` stmt = self.get_count_query() ``` with this line: ``` stmt = self.get_count_query_for_request(request) ``` This will make things work, and it will give people a hook to properly filter based on the logged user (their id). And, best of all: it is backwards-compatible. **Additional context** Without the support above, I am forced to do this every time I upgrade starlette-admin: 1) Copy both methods into my own custom subclass called WorakaroundModelView, fixing imports and hoping things will still work 2) Find the spots and apply the changes again
0easy
Title: filters on Field should be a Collection rather than List Body: Currently `filters` on `Field` is defined to be a list (in which order is preserved). Rather, it should be a `Collection` (where order is not preserved). A set of `Filter`s should be allowed. https://github.com/betodealmeida/shillelagh/blob/97197bd564e96a23c5587be5c9e315f7c0e693ea/src/shillelagh/fields.py#L177-L182 Further, the `__eq__` should then compare the filters as sets, ignoring order: https://github.com/betodealmeida/shillelagh/blob/97197bd564e96a23c5587be5c9e315f7c0e693ea/src/shillelagh/fields.py#L196
0easy
Title: write error when adding user avatar or webite Body: otherwise running perfectly on pythonanywhere without celery. It might be a wtform validation issue? Getting this error: `Traceback (most recent call last): File "/usr/lib/python3.5/site-packages/flask/app.py", line 2292, in wsgi_app response = self.full_dispatch_request() File "/usr/lib/python3.5/site-packages/flask/app.py", line 1815, in full_dispatch_request rv = self.handle_user_exception(e) File "/usr/lib/python3.5/site-packages/flask/app.py", line 1718, in handle_user_exception reraise(exc_type, exc_value, tb) File "/usr/lib/python3.5/site-packages/flask/_compat.py", line 35, in reraise raise value File "/usr/lib/python3.5/site-packages/flask/app.py", line 1813, in full_dispatch_request rv = self.dispatch_request() File "/usr/lib/python3.5/site-packages/flask/app.py", line 1799, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/usr/lib/python3.5/site-packages/flask_login/utils.py", line 261, in decorated_view return func(*args, **kwargs) File "/usr/lib/python3.5/site-packages/flask/views.py", line 88, in view return self.dispatch_request(*args, **kwargs) File "/usr/lib/python3.5/site-packages/flask/views.py", line 158, in dispatch_request return meth(*args, **kwargs) File "/home/mimesis/flaskbb/flaskbb/user/views.py", line 155, in post if self.form.validate_on_submit(): File "/usr/lib/python3.5/site-packages/flask_wtf/form.py", line 101, in validate_on_submit return self.is_submitted() and self.validate() File "/usr/lib/python3.5/site-packages/wtforms/form.py", line 310, in validate return super(Form, self).validate(extra) File "/usr/lib/python3.5/site-packages/wtforms/form.py", line 152, in validate if not field.validate(self, extra): File "/usr/lib/python3.5/site-packages/wtforms/fields/core.py", line 206, in validate stop_validation = self._run_validation_chain(form, chain) File "/usr/lib/python3.5/site-packages/wtforms/fields/core.py", line 226, in 
_run_validation_chain validator(form, self) File "/usr/lib/python3.5/site-packages/wtforms/validators.py", line 432, in __call__ message = field.gettext('Invalid URL.') File "/usr/lib/python3.5/site-packages/wtforms/fields/core.py", line 166, in gettext return self._translations.gettext(string) File "/usr/lib/python3.5/site-packages/flask_wtf/i18n.py", line 48, in gettext t = _get_translations() File "/usr/lib/python3.5/site-packages/flask_wtf/i18n.py", line 40, in _get_translations dirname, [get_locale()], domain='wtforms' File "/usr/lib/python3.5/site-packages/flask_babel/__init__.py", line 254, in get_locale if babel.locale_selector_func is None: AttributeError: '_BabelState' object has no attribute 'locale_selector_func'`
0easy
Title: Run coverage tests only when source code is modified in PR Body: # Brief Description For PRs that only touch the infrastructure code (e.g. README.md, or `.pre-commit-config.yaml` and the likes), we need not waste resources running coverage. Also, coverage is flaky sometimes. Exceptions: `.codecov.yml`, `janitor/*`, and `tests/*` ? Commits into `dev` should still run coverage.
0easy
Title: Downloader fails when download_dir is a nested folder that doesn't exist Body: ### Problem When the download folder doesn't exist, it is created by the downloader. But when that folder is nested, the creation fails. ### Code example ```python nltk.download("punkt", force=True, download_dir="some/nested/folder") ``` Will give the following error: ``` File "my_env/lib/python3.7/site-packages/nltk/downloader.py", line 776, in download for msg in self.incr_download(info_or_id, download_dir, force): File "my_env/lib/python3.7/site-packages/nltk/downloader.py", line 641, in incr_download yield from self._download_package(info, download_dir, force) File "my_env/lib/python3.7/site-packages/nltk/downloader.py", line 698, in _download_package os.mkdir(download_dir) FileNotFoundError: [Errno 2] No such file or directory: 'some/nested/folder' ``` ### Suggested solution The following line throws the error: https://github.com/nltk/nltk/blob/develop/nltk/downloader.py#L698 `os.makedirs(download_dir)` could be used instead of `os.mkdir(download_dir)`
0easy