text | labels
---|---|
Title: [Apibench] Resume interrupted LLM generations from last generation
Body: **Describe the issue**
When we try to launch LLM generations on APIBench and the generation is interrupted midway, the partial file is currently deleted and generation restarts from the beginning. What we should do instead is check where the generation stopped and resume from the last generated index. This is especially critical for large models, where generation can take more than an hour.
**ID datapoint**
2. Provider: This affects all 3: TorchHub/HuggingFace/PyTorch Hub
File that needs to be edited: https://github.com/ShishirPatil/gorilla/blob/main/eval/get_llm_responses.py
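A minimal sketch of the requested resume behaviour (the `questions`, `output_path` and `generate_fn` names are placeholders, not the actual `get_llm_responses.py` interface):
```python
import json
import os

def generate_with_resume(questions, output_path, generate_fn):
    """Append-only generation loop that skips indices already present in the output file."""
    done = set()
    if os.path.exists(output_path):
        with open(output_path) as f:
            done = {json.loads(line)["question_id"] for line in f if line.strip()}

    with open(output_path, "a") as f:
        for idx, question in enumerate(questions):
            if idx in done:
                continue  # already generated before the interruption
            answer = generate_fn(question)
            f.write(json.dumps({"question_id": idx, "text": answer}) + "\n")
            f.flush()  # keep partial progress on disk in case of another crash
```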
| 0easy
|
Title: VWMA calculation is weird
Body: I think the VWMA calculation is weird, since it does not take **weighting by recent values** into consideration.
It uses
```python
pv = close * volume
vwma = sma(close=pv, length=length) / sma(close=volume, length=length)
```
For the WMA calculation, [check out this](https://www.fidelity.com/learning-center/trading-investing/technical-analysis/technical-indicator-guide/wma); VWMA should arguably use the same scheme, with volume added as an additional weight:
```python
WMA = ((P1 * 5) + (P2 * 4) + (P3 * 3) + (P4 * 2) + (P5 * 1)) / (5 + 4 + 3 + 2 + 1)
VWMA = ?
```
right? what do you think?
TradingView is using the same as the current calculation https://www.tradingview.com/pine-script-reference/v3/#fun_vwma
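To make the two weighting schemes concrete, here is a small self-contained sketch (made-up numbers, not the library's implementation):
```python
import pandas as pd

close = pd.Series([10.0, 11.0, 12.0, 13.0, 14.0])
volume = pd.Series([100.0, 50.0, 300.0, 400.0, 500.0])
length = 5

# Current VWMA: weighted by volume only (an SMA of price*volume over an SMA of volume).
vwma = (close * volume).rolling(length).mean() / volume.rolling(length).mean()

# WMA: weighted linearly by recency (most recent bar gets the largest weight), no volume.
weights = pd.Series(range(1, length + 1), dtype=float)
wma = close.rolling(length).apply(
    lambda x: (x * weights.values).sum() / weights.sum(), raw=True
)

print(vwma.iloc[-1], wma.iloc[-1])
```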
Maybe:
WMA is weighted by recency
VWMA is weighted by volume, without recency weighting
If that is the case, then please feel free to close this | 0easy
|
Title: Documentation overhaul (for clarity)
Body: - People sometimes get confused by our `curl` commands, and there is also an inconsistency between placeholders in URL and non-URL (`:name` vs. `{token}`). Let's use braces style (consistent with OpenAPI)
- Under Windows, `curl` apparently works like this (cf. email to me on 2020-04-04):
```
echo {"email": "youremailaddress@example.com", "password": "yourpassword"} | curl -X POST https://desec.io/api/v1/auth/login/ --header "Content-Type: application/json" --data @-
```
The user also sent a step-by-step instruction for Windows on 2020-04-07. Consider incorporating it somewhere.
- replace `password='[password]'` with `password='[TOKEN]'` at https://desec.readthedocs.io/en/latest/dyndns/configure.html#option-2-use-ddclient, and add "To get the token: log in with the credentials defined in the registration process." | 0easy
|
Title: Lazy listeners do not work with Chalice local testing server
Body: I want to perform operations that take more than 5 seconds when a member joins a channel, but I'm getting **BrokenPipeError: [Errno 32] Broken pipe**.
A minimal reproduction is shown below; the lazy listener waits 10 seconds before responding.
### Reproducible in:
```python
import time
from chalice import Chalice, Response
from slack_bolt import App
from slack_bolt.adapter.aws_lambda.chalice_handler import ChaliceSlackRequestHandler
bolt_app = App(process_before_response=True)
app = Chalice(app_name='chalice_app_name')
slack_handler = ChaliceSlackRequestHandler(app=bolt_app, chalice=app)
@bolt_app.event("message")
def handle_message_events(body, logger):
logger.info(body['event']['text'])
def respond_to_slack_within_3_seconds(ack):
ack("Accepted!")
def say_it(say):
time.sleep(10)
say("Done!")
bolt_app.event("member_joined_channel")(
ack=respond_to_slack_within_3_seconds, lazy=[say_it]
)
@app.route(
"/slack/events",
methods=["POST"],
content_types=["application/x-www-form-urlencoded", "application/json"],
)
def events() -> Response:
return slack_handler.handle(app.current_request)
```
#### The `slack_bolt` version
slack-bolt==1.9.1
slack-sdk==3.11.1
#### Python runtime version
Python 3.6.15
#### OS info
Ubuntu 20.04.3 LTS
#### Steps to reproduce:
1. While handling any Slack event (e.g. a member_joined_channel event)
2. Use lazy listener and wait for more than 3 seconds before responding back to Slack
### Expected result:
Since the event is acknowledged to Slack within 3 seconds, irrespective of how long the lazy listener function takes, the request should have been processed.
### Actual result:


## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| 0easy
|
Title: rename dataframe to df in conftest
Body: | 0easy
|
Title: PairwiseDistance with bottleneck metric returns non-symmetric matrix when delta is not zero
Body: Test code:
```python
from sklearn.datasets import load_digits
from gtda.homology import CubicalPersistence
from gtda.diagrams import PairwiseDistance
X, _ = load_digits(return_X_y=True)
nb_samples = 100
CP = CubicalPersistence(homology_dimensions=(0, 1))
X_diag = CP.fit_transform(X[:nb_samples].reshape(nb_samples, 8, 8))
PD = PairwiseDistance(
metric='bottleneck', metric_params={'delta': 0.01}, order=None)
X_distance = PD.fit_transform(X_diag)
```
The output returns, for example, `X_distance[:, :, 0]` which is not symmetric. As noticed by @ulupo in [this discussion](https://groups.io/g/dionysus/topic/bottleneck_distance_is_not/22450341?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,0,22450341), the computation of the bottleneck distance is made by Hera and when delta is not zero the calculations are approximate and symmetry is not guaranteed.
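Continuing the snippet above, a quick way to see the reported asymmetry (not part of the original report):
```python
import numpy as np

D0 = X_distance[:, :, 0]           # distances in homology dimension 0
print(np.allclose(D0, D0.T))       # False when delta > 0: Hera's result is only approximate
print(np.abs(D0 - D0.T).max())     # magnitude of the worst asymmetry
```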
The documentation of PairwiseDistance should include this explanation. | 0easy
|
Title: Problem with EOM indicator
Body: Hi
I am trying to get the EOM indicator:
`eom = ta.eom(dfrates['high'],dfrates['low'],dfrates['close'],dfrates['volume'],14)`
and I get this traceback:
```
File "d:\Jhon\Desktop\Proyecto\Ejemplos\TA\bot_con_ta.py", line 10, in <module> fn.ModeloNN()
File "d:\Jhon\Desktop\Proyecto\Ejemplos\TA\funciones_con_ta.py", line 560, in ModeloNN EntrenadorNN()
File "d:\Jhon\Desktop\Proyecto\Ejemplos\TA\funciones_con_ta.py", line 465, in EntrenadorNN
X_train,Y_train,X_test,Y_test = Datos('entr',bars)
File "d:\Jhon\Desktop\Proyecto\Ejemplos\TA\funciones_con_ta.py", line 290, in Datos
eom = ta.eom(dfrates['high'],dfrates['low'],dfrates['close'],dfrates['volume'],14)
File "C:\Program Files\Python310\lib\site-packages\pandas_ta\volume\eom.py", line 27, in eom
eom = sma(eom, length=length)
File "C:\Program Files\Python310\lib\site-packages\pandas_ta\overlap\sma.py", line 21, in sma
sma = SMA(close, length)
File "C:\Program Files\Python310\lib\site-packages\talib\__init__.py", line 64, in wrapper
result = func(*_args, **_kwds)
File "talib/_func.pxi", line 4538, in talib._ta_lib.SMA
File "talib/_func.pxi", line 68, in talib._ta_lib.check_begidx1
Exception: inputs are all NaN
``` | 0easy
|
Title: Improvements to PrecisionRecallCurve
Body: There are a few things that can be updated in the `PrecisionRecallCurve` visualizer:
- [x] allow the user to specify colors as a list of colors or a colormap and use `resolve_colors` in `_draw_multiclass`.
- [ ] refactor shared `_get_y_scores` as a mixin, utility function, or shared base class for `ROCAUC` and `PrecisionRecallCurve`.
- [x] Allow user to customize the values of the iso_f1_curves (hard coded to `[0.2, 0.4, 0.6, 0.8]` now).
- [x] Allow user to input training and testing set instead of using `train_test_split` in the quick method.
- [x] Remove grid in multiclass case so it's not as busy
See also #602
| 0easy
|
Title: Dask.Visualize ignoring engine setting
Body: Hello guys!
I am running some code on a remote Jupyter notebook server. As I can't install graphviz, I am trying to use ipycytoscape to render the visualize() results.
If I run an example snippet, it creates an HTML file containing the visualization:
```
import dask
import dask.array as da
with dask.config.set({"visualization.engine": "cytoscape"}):
x = da.ones((15, 15), chunks=(5, 5))
y = x + x.T
y.visualize()
```

So far, no issue.
But as soon as I try to display my real dask.dataframe graphs, it ignores the setting and tries to use graphviz:
```
import dask
df = dask.dataframe.read_parquet(path= 'Temp/', dtype_backend='pyarrow')
with dask.config.set({"visualization.engine": "cytoscape"}):
df.visualize()
```

```
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
File /projeto/libs/lib/python3.11/site-packages/dask/utils.py:327, in import_required(mod_name, error_msg)
326 try:
--> 327 return import_module(mod_name)
328 except ImportError as e:
File /usr/local/lib/python3.11/importlib/__init__.py:126, in import_module(name, package)
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1206, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1178, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:1142, in _find_and_load_unlocked(name, import_)
ModuleNotFoundError: No module named 'graphviz'
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
Cell In[26], line 6
4 df = dd.read_parquet(path= 'Temp/', dtype_backend='pyarrow')
5 with dask.config.set({"visualization.engine": "cytoscape"}):
----> 6 df.visualize()
File /projeto/libs/lib/python3.11/site-packages/dask_expr/_collection.py:643, in FrameBase.visualize(self, tasks, **kwargs)
641 if tasks:
642 return super().visualize(**kwargs)
--> 643 return self.expr.visualize(**kwargs)
File /projeto/libs/lib/python3.11/site-packages/dask_expr/_core.py:715, in Expr.visualize(self, filename, format, **kwargs)
697 """
698 Visualize the expression graph.
699 Requires ``graphviz`` to be installed.
(...)
711 Additional keyword arguments to forward to ``to_graphviz``.
712 """
713 from dask.dot import graphviz_to_file
--> 715 g = self._to_graphviz(**kwargs)
716 graphviz_to_file(g, filename, format)
717 return g
File /projeto/libs/lib/python3.11/site-packages/dask_expr/_core.py:631, in Expr._to_graphviz(self, rankdir, graph_attr, node_attr, edge_attr, **kwargs)
621 def _to_graphviz(
622 self,
623 rankdir="BT",
(...)
627 **kwargs,
628 ):
629 from dask.dot import label, name
--> 631 graphviz = import_required(
632 "graphviz",
633 "Drawing dask graphs with the graphviz visualization engine requires the `graphviz` "
634 "python library and the `graphviz` system library.\n\n"
635 "Please either conda or pip install as follows:\n\n"
636 " conda install python-graphviz # either conda install\n"
637 " python -m pip install graphviz # or pip install and follow installation instructions",
638 )
640 graph_attr = graph_attr or {}
641 node_attr = node_attr or {}
File /projeto/libs/lib/python3.11/site-packages/dask/utils.py:329, in import_required(mod_name, error_msg)
327 return import_module(mod_name)
328 except ImportError as e:
--> 329 raise RuntimeError(error_msg) from e
RuntimeError: Drawing dask graphs with the graphviz visualization engine requires the `graphviz` python library and the `graphviz` system library.
Please either conda or pip install as follows:
conda install python-graphviz # either conda install
python -m pip install graphviz # or pip install and follow installation instructions
```
Am I doing anything wrong?
**Environment**:
- Dask version: 2024.10.0
- Python version: 3.11.9
- Install method (conda, pip, source): PIP
| 0easy
|
Title: Marketplace - search results - change line height
Body:
### Describe your issue.
Change line height of the search term from 2 to 32px
<img width="1410" alt="Screenshot 2024-12-13 at 21 08 39" src="https://github.com/user-attachments/assets/92923348-6b60-4a3e-bda4-da1aaf7c414c" />
| 0easy
|
Title: [BUG] PLForecastingModule._calculate_metrics should call log_dict with batch_size
Body: **Describe the bug**
When using additional metrics with PyTorch Lightning forecasting models, because `PLForecastingModule._calculate_metrics` (called from `train_step()` or `validation_step()`) does not call `self.log_dict(...)` with the `batch_size=` parameter, the Trainer (`_ResultCollection.log()`) complains about ambiguity (when `show_warnings=True` is enabled on the model):
> /opt/homebrew/Caskroom/miniconda/base/envs/tf/lib/python3.12/site-packages/pytorch_lightning/utilities/data.py:77: Trying to infer the `batch_size` from an ambiguous collection. The batch size we found is 32. To avoid any miscalculations, use `self.log(..., batch_size=batch_size)`.
**To Reproduce**
Add a custom metric via `torch_metrics=...` model parameter (e.g. BlockRNNModel), and run .fit().
**Expected behavior**
The warnings should be gone.
**System (please complete the following information):**
- Python version: 3.12.3
- darts version: 0.29.0
**Additional context**
suggested code modification (tested):
```python
def _calculate_metrics(self, output, target, metrics):
if not len(metrics):
return
if self.likelihood:
_metric = metrics(self.likelihood.sample(output), target)
else:
# If there's no likelihood, nr_params=1, and we need to squeeze out the
# last dimension of model output, for properly computing the metric.
_metric = metrics(output.squeeze(dim=-1), target)
self.log_dict(
_metric,
on_epoch=True,
on_step=False,
logger=True,
prog_bar=True,
sync_dist=True,
batch_size=target.shape[0], # ADD THIS LINE
)
``` | 0easy
|
Title: The problem of evaluation
Body: Running test.py works fine for me, but when the evaluation reaches "eval success" it throws an error:
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "E:\Anaconda3\envs\pysot\lib\multiprocessing\pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "E:\pysot\toolkit\evaluation\ope_benchmark.py", line 50, in eval_success
success_ret_[video.name] = success_overlap(gt_traj, tracker_traj, n_frame)
File "E:\pysot\toolkit\utils\statistics.py", line 105, in success_overlap
iou[mask] = overlap_ratio(gt_bb[mask], result_bb[mask])
IndexError: too many indices for array
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "E:/pysot/tools/eval.py", line 146, in <module>
main()
File "E:/pysot/tools/eval.py", line 52, in main
trackers), desc='eval success', total=len(trackers), ncols=100):
File "E:\Anaconda3\envs\pysot\lib\site-packages\tqdm\std.py", line 1107, in __iter__
for obj in iterable:
File "E:\Anaconda3\envs\pysot\lib\multiprocessing\pool.py", line 748, in next
raise value
IndexError: too many indices for array
Is this a PyTorch version problem? My PyTorch version is 0.4.1, with CUDA 8. | 0easy
|
Title: `NoiseModelFromGoogleNoiseProperties` should raise an error when simulations use qubits not on the device or an uncompiled circuit
Body: **Description of the issue**
[Noisy Simulation](https://quantumai.google/cirq/simulate/noisy_simulation#simulation_with_realistic_noise) describes how to run simulations with realistic noise obtained from a real quantum device. However, when using that noise model with a gate that is not native to the device, no noise gets added in the simulation (e.g. https://github.com/quantumlib/Cirq/issues/6607#issuecomment-2118721221). Another issue is that when the circuit uses qubits not on the device, a KeyError is raised with little information explaining why.
**Proposed Solution**
Before (or while) running the simulation, validate that the circuit uses only supported qubits and gates.
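A rough sketch of the kind of pre-flight check meant here (a hypothetical helper, not existing Cirq API; it assumes the device exposes its qubits via `device.metadata.qubit_set`):
```python
import cirq

def validate_qubits_on_device(circuit: cirq.Circuit, device: cirq.Device) -> None:
    """Fail fast if the circuit touches qubits the device does not have."""
    device_qubits = set(device.metadata.qubit_set)
    missing = set(circuit.all_qubits()) - device_qubits
    if missing:
        raise ValueError(
            f"Circuit uses qubits not present on the device: {sorted(missing)}"
        )
```
A similar gate check could presumably reuse the device gateset (something like `device.metadata.gateset.validate(circuit)`), though I have not verified that against the current API.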
**How to reproduce the issue**
https://github.com/quantumlib/Cirq/issues/6607
**Cirq version**
`1.4.0.dev20240419073809`
| 0easy
|
Title: [DOC] Binary Segmentation Estimator webpage missing
Body: #### Describe the issue linked to the documentation
The link from the estimator overview page ( https://www.sktime.net/en/stable/estimator_overview.html ) to Binary Segmentation is :
https://www.sktime.net/en/stable/api_reference/auto_generated/sktime.annotation.bs.BinarySegmentation.html
which is returning a 404 not found.
| 0easy
|
Title: Issues Active metric API
Body: The canonical definition is here: https://chaoss.community/?p=3632 | 0easy
|
Title: SyntaxWarnings
Body: While running an `apt upgrade` I noticed:
```
/usr/lib/python3/dist-packages/pyqtgraph/examples/SpinBox.py:38: SyntaxWarning: invalid escape sequence '\$'
regex='\$?(?P<number>(-?\d+(\.\d+)?)|(-?\.\d+))$')),
```
The `\$` should be written `\\$` or `r'\$'` as of the last few Python releases (the same goes for all backslash escapes that have no meaning). I don't have time to search for other occurrences, but running the tests with `PYTHONDEVMODE=1` should help spot them :)
|
Title: Provide an extensive example of API usage in go-bot
Body: The go-bot is able ([example](https://github.com/deepmipt/DeepPavlov/blob/master/examples/gobot_extended_tutorial.ipynb)) to query an API to get the data. This allows the dialog to rely on some explicit knowledge.
In the provided example the bot queries a database to perform some read operations. Operations of other classes (e.g. updates) seem to be possible, but we have no examples of such cases.
**The contribution could follow these steps**:
* discover the purpose of update- queries in goal-oriented datasets
* find the dataset with such api calls or generate an artificial one
* provide an example of the go-bot trained on this dataset and using this data | 0easy
|
Title: Add Flower Baseline: FedDefender
Body: ### Paper
Gill, Waris and Anwar, Ali and Gulzar, Muhammad Ali (2023) - FedDefender: Backdoor Attack Defense in Federated Learning
### Link
https://arxiv.org/abs/2307.08672
### Maybe give motivations about why the paper should be implemented as a baseline.
FedDefender is a defense against backdoor attacks in federated learning by leveraging differential testing for FL. FedDefender minimizes the impact of a malicious client on the global model by limiting its contribution to the aggregated global model. FedDefender fingerprints the neuron activations of clientsโ models on the same input and uses differential testing to identify potential malicious clients.
### Is there something else you want to add?
_No response_
### Implementation
#### To implement this baseline, it is recommended to do the following items in that order:
### For first time contributors
- [x] Read the [`first contribution` doc](https://flower.ai/docs/first-time-contributors.html)
- [ ] Complete the Flower tutorial
- [x] Read the Flower Baselines docs to get an overview:
- [x] [How to use Flower Baselines](https://flower.ai/docs/baselines/how-to-use-baselines.html)
- [x] [How to contribute a Flower Baseline](https://flower.ai/docs/baselines/how-to-contribute-baselines.html)
### Prepare - understand the scope
- [X] Read the paper linked above
- [X] Decide which experiments you'd like to reproduce. The more the better!
- [X] Follow the steps outlined in [Add a new Flower Baseline](https://flower.ai/docs/baselines/how-to-contribute-baselines.html#add-a-new-flower-baseline).
- [X] You can use as reference [other baselines](https://github.com/adap/flower/tree/main/baselines) that the community merged following those steps.
### Verify your implementation
- [ ] Follow the steps indicated in the `EXTENDED_README.md` that was created in your baseline directory
- [ ] Ensure your code reproduces the results for the experiments you chose
- [ ] Ensure your `README.md` is ready to be run by someone who is not familiar with your code. Are all step-by-step instructions clear?
- [ ] Ensure running the formatting and typing tests for your baseline runs without errors.
- [ ] Clone your repo on a new directory, follow the guide on your own `README.md` and verify everything runs. | 0easy
|
Title: Read in README in `setup.py`
Body: Switch `setup.py` to read in the README as `long_description` so it appears on PyPI's page for the project.
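A minimal sketch of the change (package name and README file name are placeholders, not the project's actual metadata):
```python
from pathlib import Path
from setuptools import setup, find_packages

# Read the README so PyPI can render it on the project page.
long_description = Path(__file__).parent.joinpath("README.md").read_text(encoding="utf-8")

setup(
    name="example-package",  # placeholder name
    version="0.1.0",
    packages=find_packages(),
    long_description=long_description,
    long_description_content_type="text/markdown",  # use "text/x-rst" for README.rst
)
```
| 0easy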
|
Title: Document differences with respx
Body: Document differences between this module and [respx](https://lundberg.github.io/respx/) | 0easy
|
Title: Review Cycle Duration within a Change Request metric API
Body: The canonical definition is here: https://chaoss.community/?p=3445 | 0easy
|
Title: CMO (Chande Momentum Oscillator) calculation
Body: Original CMO explanation from Chande's book:


**To Reproduce**
```python
df = ta.DataFrame({'close': [101.0313, 101.125, 101.9687, 102.7813, 103, 102.9687, 103.0625, 102.9375, 102.7188, 102.75, 102.9063, 102.9687 ]})
df['cmo'] = ta.cmo(close = df['close'], length=10)
df
```
**Expected behavior**
This should generate THREE cmo values - for the last three rows of data, equivalent to what is in the book.
Instead, we get:
```sh
close | cmo
101.031 | NaN
101.125 | NaN
101.969 | NaN
102.781 | NaN
103.000 | NaN
102.969 | NaN
103.062 | NaN
102.938 | NaN
102.719 | NaN
102.750 | NaN
102.906 | 60.091
102.969 | 61.928
```
Remark: TA-LIB also mis-calculates CMO and returns different results from the book (and different results from Pandas-TA)... | 0easy
|
Title: Clarifications to 'Get Started!' section of documentation, based on errors encountered
Body: ## Clarifications to the `Get Started!` section of contributing.html
All: It took me a surprisingly long time to get the `pyjanitor` development environment set up on Windows 10. I have a couple of questions and an offer to add some clarifications
## Relevant Context
<!-- Please put here, in bullet points, links to the relevant docs page. A few starting template points are available
to get you started. -->
- [Contributing.html](https://pyjanitor.readthedocs.io/contributing.html)
- What follows refers to the [Get Started!](https://pyjanitor.readthedocs.io/contributing.html#get-started) section
### 1. Running make on Windows
I got stuck while trying to run make. The documentation is helpful and instructed me to run:
$ conda install -c defaults -c conda-forge make
This worked fine. However, even though `make` was installed, I got the 'bash: make not found' error.
*Question:* How to follow up after conda installing make? Do I need to add it explicitly to $PATH? The documentation does not explain this part.
After a lot of searching, I found one fix that let me proceed. It had to do with copying MinGW,
which has a `make` in one of its bin directories. (I can add a note in the docs about this for other Windows developers who follow. Not sure how typical this is.)
Finally, got `make install` to work fine.
### 2. Running `make docs`
Next, I was hit with an error saying modules were not found. (But `make install` had worked fine.) I was following the instructions verbatim, but there was no mention of activating the conda environment.
Once I ran `source activate pyjanitor-dev` things worked fine when I *then* ran `make docs`
If my understanding is correct, source activate is needed prior to running `make docs`
Based on my experience, I'd like to make a few very minor modifications to the 'Getting Started' section of the documentation to make it easier for future developers of pyjanitor.
Please let me know if this is something that would be relevant. Also, I am not sure if my understanding of the steps is correct.
Or perhaps my experience was not typical, in which case we shouldn't modify the Getting Started instructions. | 0easy
|
Title: Docker compose
Body: I think including a Docker Compose file would improve the usability of the boilerplate.
Use mine as reference: https://github.com/asacristani/fastapi-rocket-boilerplate/blob/main/docker-compose.yml | 0easy
|
Title: Timezones in fbprophet
Body: Due to this issue https://github.com/facebook/prophet/issues/831
topic 9 part 2 notebook needs to be changed a bit:
df = daily_df.reset_index()
df.columns = ['ds', 'y']
**df['ds'] = df['ds'].dt.tz_convert(None)**
| 0easy
|
Title: Try pygmentize-faster
Body: Xonsh is using [pygments](https://pygments.org/). Try [pygmentize-faster](https://github.com/joouha/pygmentize-faster) and report the results: does it make xonsh's startup/usage time faster?
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: add brainIAK citation for SRM module
Body: From the BrainIAK readme:
```
Please cite BrainIAK in your publications as: "Brain Imaging Analysis Kit, http://brainiak.org." Additionally, if you use RRIDs to identify resources, please mention BrainIAK as "Brain Imaging Analysis Kit, RRID:SCR_014824". Finally, please cite the publications referenced in the documentation of the BrainIAK modules you use, e.g., SRM.
https://github.com/IntelPNI/brainiak
```
The most obvious place, I think, would be to paste in the reference here: http://hypertools.readthedocs.io/en/latest/hypertools.tools.align.html#hypertools.tools.align | 0easy
|
Title: Add a new parameter to the PCADecomposition visualizer in order to plot the feature columns in the projected space (biplot)
Body: ### Proposal
Add a new parameter to the PCADecomposition class in order to have the option to plot the input columns.
I would like to have something like a biplot.

I can start working on the code, but I would like to know if you think it's a good idea.
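For context, a minimal, framework-agnostic sketch of a biplot (not Yellowbrick's API, just an illustration of the idea): project the data with PCA and overlay each feature column as an arrow using its loadings.
```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names

pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)

fig, ax = plt.subplots()
ax.scatter(X_2d[:, 0], X_2d[:, 1], c=y, alpha=0.5)
for i, name in enumerate(feature_names):
    # pca.components_ has shape (2, n_features); column i holds the loadings of feature i.
    ax.arrow(0, 0, pca.components_[0, i] * 3, pca.components_[1, i] * 3, color="r", width=0.01)
    ax.annotate(name, (pca.components_[0, i] * 3.2, pca.components_[1, i] * 3.2))
plt.show()
```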
| 0easy
|
Title: [DOCS] confusion balancer is misaligned
Body: needs to be h2 instead of h1

| 0easy
|
Title: Set autocomplete to "off" on dcc.Input doesn't work
Body: Should be `autoComplete` instead.
https://github.com/plotly/dash-core-components/blob/62e90491c67a79edcfb0bdbdf2f5fa134efd791e/src/components/Input.react.js#L161
https://community.plot.ly/t/how-to-disable-suggestions-on-input-component/18711 | 0easy
|
Title: Add Quarter primitive
Body: - This primitive determines the quarter a datetime column falls into (1, 2, 3, 4) | 0easy
|
Title: [LineZoneAnnotator] - make the display of in/out values optional
Body: ### Description
Make the display of [`in_count`](https://github.com/roboflow/supervision/blob/3024ddca83ad837651e59d040e2a5ac5b2b4f00f/supervision/detection/line_counter.py#L53) and [`out_count`](https://github.com/roboflow/supervision/blob/3024ddca83ad837651e59d040e2a5ac5b2b4f00f/supervision/detection/line_counter.py#L54) values optional. This may require adding additional `display_in_count` and `display_out_count` arguments to [`LineZoneAnnotator`](https://github.com/roboflow/supervision/blob/3024ddca83ad837651e59d040e2a5ac5b2b4f00f/supervision/detection/line_counter.py#L165).
### API
```python
class LineZoneAnnotator:
def __init__(
self,
thickness: float = 2,
color: Color = Color.WHITE,
text_thickness: float = 2,
text_color: Color = Color.BLACK,
text_scale: float = 0.5,
text_offset: float = 1.5,
text_padding: int = 10,
custom_in_text: Optional[str] = None,
custom_out_text: Optional[str] = None,
display_in_count: bool = True,
display_out_count: bool = True,
):
# Existing initialization code...
self.display_in_count: bool = display_in_count
self.display_out_count: bool = display_out_count
def annotate(self, frame: np.ndarray, line_counter: LineZone) -> np.ndarray:
# Required logic changes...
```
### Additional
- Note: Please share a Google Colab with minimal code to test the new feature. We know it's additional work, but it will definitely speed up the review process. Each change must be tested by the reviewer. Setting up a local environment to do this is time-consuming. Please ensure that Google Colab can be accessed without any issues (make it public). Thank you! 🙏🏻 | 0easy
|
Title: Add or extend template to allow more control over window size
Body: At present, the ``plotly_app`` tag inserts an iframe that extends to 100% of the width of its parent, and sets its height as some (user controlled) relative fraction of the width.
This needs to be extended (either with more arguments for this tag, or with a different template tag) to allow more user control over the size.
| 0easy
|
Title: Switch to pip 19.0.3
Body: After releasing v0.8.0 we should check if pip 19.0.3 fixes the problem that we worked around with #574. | 0easy
|
Title: Review docs for lists and ensure they are bullets/numbers according to the information they present
Body: This is issue #965 on the internal repo. It's perfect for an external contributor and they don't need any major awareness of Vizro.
Docs change is needed where bullets are used in docs and present a sequence of actions -- this should be a numbered list.
Likewise, if a numbered list is used, it should be because the items need to be done in that particular sequence.
The task is to review each page of the core and vizro-ai docs and check where bullets are used to confirm they're correctly formatted. There may be some lists that we don't need (and it may sometimes make sense to add them, but that's an extension task).
Reference to -> https://github.com/mckinsey/vizro/pull/482#discussion_r1613205245 | 0easy
|
Title: Increase Clickable Area for `Checkboxes`
Body: The current clickable area for `Checkboxes` is too small, which can lead to difficulty in selecting checkboxes, especially on touch devices. Increasing the clickable area would enhance usability and accessibility. | 0easy
|
Title: [New feature] Add apply_to_images to Downscale
Body: | 0easy
|
Title: [ENH] Smarter auto color encodings
Body: Palette selection is currently manual:
```g.encode_point_color('my_col', ["color1", ...], as_continuous=True)```
Some possible improvements:
1. Faster palette selection:
* named palettes, e.g., from color brewer
* if no palette is provided but continuous/categorical is known, pick a palette, and if data is available, try to right-size
2. Automate labeling of as_continuous/as_categorical
If data is available, or at least dtype:
* categorical dtype -> categorical
* strings, bools -> categorical
* numeric, time -> continuous
Ex:
```g.edges(events_df).encode_edge_color('time')``` => automatically cold-to-hot
Implementation wise, may be worthwhile to make lazy:
1. at time of `.encode_point_color()`: simple!
2. defer till `.plot()`: data guaranteed to be available!
3. new `auto` setting pushed to server: now client-independent: currently exception-raising calls will instead tag with `auto`, and server has freedom to handle during upload or pageload
I'm leaning towards 1. for now (simple), and as encodings refactor on the server, switch from narrow & strict semantics to 3's lazy. | 0easy
|
Title: Adding Binance / Other Exchanges
Body: An issue to collect data and tasks needed for adding binance
- [x] Decrease Callback/ Load Amount to 1 in #40
- [x] Analyze compatibility
Needed changes in Data call?
Needed changes in Data storage (Multiple times same pair e.g. "ETH-USD")
- [x] Discuss form of presentation
Original: Send all data to the client, and hide/show the selection on the client side with JS?
**Update**: Sending all data should be no problem due to changes in #40
Maybe rework show/ hide in the future, to show/ hide complete exchanges.
- [ ] Add Dropdown to select exchange
**Update** Show/ Hide Button is much easier
- [ ] define format for get_data function
Goal is to keep get_data modular for future.
- [ ] exclude api calls from get_data function
When we change this we should change #28 aswell
Thanks and Greets
Theimo | 0easy
|
Title: Bug: Custom Serialization/Deserialization logic in Redis doesn't work unless UTF-8 serializable
Body: **Describe the bug**
I am working on a middleware to add msgpack serialization to all messages (using ormsgpack which natively handles Pydantic models).
The issue seems to be that FastStream JSON-serializes messages along with their headers here:
```python
# faststream/redis/parser.py
class RawMessage:
# line: 85
@classmethod
def encode(
cls,
*,
message: Union[Sequence["SendableMessage"], "SendableMessage"],
reply_to: Optional[str],
headers: Optional["AnyDict"],
correlation_id: str,
) -> bytes:
msg = cls.build(
message=message,
reply_to=reply_to,
headers=headers,
correlation_id=correlation_id,
)
return dump_json(
{
"data": msg.data,
"headers": msg.headers,
}
)
```
Technically `msg.data` is supposed to be able to be bytes, but in practice it has to be UTF-8 compatible or an exception is raised:
```bash
File /.../.venv/lib/python3.12/site-packages/faststream/redis/publisher/producer.py:79 in publish

    msg = RawMessage.encode(
        message=message,
        reply_to=reply_to,
        headers=headers,

    locals (excerpt):
        list = 'job-queue'
        message = b'\x82\xa3url\xb1http://kelly.com/\xa4type\xa3far'
        correlation_id = '2ac5886a-736d-4712-b878-448b8b041f43'
        headers = None
        reply_to = ''

File /.../.venv/lib/python3.12/site-packages/faststream/redis/parser.py:101 in encode

    return dump_json(
        {
            "data": msg.data,
            "headers": msg.headers,

File /.../.venv/lib/python3.12/site-packages/faststream/_compat.py:93 in dump_json

    return json_dumps(model_to_jsonable(data))

    locals:
        data = {
            'data': b'\x82\xa3url\xb1http://kelly.com/\xa4type\xa3far',
            'headers': {'correlation_id': '2ac5886a-736d-4712-b878-448b8b041f43'}
        }

File /.../.venv/lib/python3.12/site-packages/faststream/_compat.py:90 in model_to_jsonable

    return to_jsonable_python(model, **kwargs)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x82 in position 0: invalid utf-8
```
Here's some test code:
```python
@app.after_startup
async def startup():
faker = Faker("en_US")
urls = [faker.url() for _ in range(10)]
for url in tqdm.tqdm(urls):
obj = Request(
url=url,
type=faker.word(),
)
await broker.publish(
ormsgpack.packb(obj, option=ormsgpack.OPT_SERIALIZE_PYDANTIC),
list="job-queue",
)
```
The error is occurring on the producer side, and the only workaround I've found so far is a runtime monkey-patch of the `RawMessage.encode` method to do the msgpack serialization at the final message-encoding phase. This complicates things on the parsing side, though, as it breaks the normal message parsing (i.e. getting message headers, correlation_id, etc.).
Any suggestions? Perhaps there needs to be additional Middleware hooks for handling the final message serialization and initial message deserialization so that serialization methods that utilize non utf-8 compatible binary are supported?
The only other option I can think of is base64 encoding the binary before message serialization which would kind of defeat the space-saving purpose of using a binary format.
Related: https://github.com/airtai/faststream/issues/1255
| 0easy
|
Title: No documentation on how to export a circuit to OpenQASM output
Body: **Is your feature request related to a use case or problem? Please describe.**
https://quantumai.google/cirq/build/interop documents how to import from an OpenQASM input, but not how to export. I never knew this was possible until I read the [source code](https://github.com/qBraid/qBraid/blob/main/qbraid/transpiler/cirq_qasm/qasm_conversions.py) of qBraid.
**Describe the solution you'd like**
While the API used in qBraid works, I still have to:
- `circuit.all_operations()` as one of the arguments
- generate qubit order from `ops.QubitOrder`
Ideally, I could do `cirq.circuit_to_qasm(circuit)` (named to be consistent with `from cirq.contrib.qasm_import import circuit_from_qasm`) with no other required arguments.
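For reference, a minimal sketch of what already appears to work today (based on my reading of the source rather than documented API, so treat it as an assumption):
```python
import cirq

q0, q1 = cirq.LineQubit.range(2)
circuit = cirq.Circuit(cirq.H(q0), cirq.CNOT(q0, q1), cirq.measure(q0, q1))

# Circuit.to_qasm() / the cirq.qasm protocol seem to handle operations and qubit ordering internally.
qasm_str = circuit.to_qasm()
print(qasm_str)
```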
**What is the urgency from your perspective for this issue? Is it blocking important work?**
Not a blocker because qBraid already does it. P3. | 0easy
|
Title: I downloaded and unzipped it to use directly; which folder are the downloaded images saved in? I can't find them!
Body: | 0easy
|
Title: cms/toolbar/toolbar_javascript.html shows python and django version
Body: When we are logged in as CMS users and have access to the cms toolbar, the "toolbar_javascript.html" template source code shows version information of Django, Python, etc. I think this could lead to security issues. A good solution would be to hide this data for production when settings.DEBUG=False

django-cms==3.9.0
| 0easy
|
Title: Raise test coverage above 90% for gtda/mapper/visualization.py
Body: Current test coverage from pytest is 10% | 0easy
|
Title: Improve template logging and cookbook
Body: While using alembic I came across two areas that I think could be improved:
- in the template `env.py`, before setting the log config, we should check whether it is already set up.
I think that we can do it by checking if the root logger has an handler configured.
This is useful if alembic is used programmatically, since it avoids resetting the program logging config.
I guess an option or a comment may be enough for this, if the update to the template is too much.
- Add a cookbook recipe on how to add a non nullable column without a server default value | 0easy
|
Title: Small refactoring: make _custom_command an instance method of CustomParser
Body: We have a `_custom_command` function: https://github.com/ploomber/ploomber/blob/2e0d764c8f914ba21480b275c545ea50ff800513/src/ploomber/cli/parsers.py#L399
that should really be an instance method of `CustomParser`: https://github.com/ploomber/ploomber/blob/2e0d764c8f914ba21480b275c545ea50ff800513/src/ploomber/cli/parsers.py#L36
Steps:
- [ ] make _custom_command an instance method
- [ ] refactor all tests that call _custom_command
- [ ] rename _custom_command to `load_from_entry_point_arg`
| 0easy
|
Title: Catch error when using `product` instead of `product[key]`
Body: (there is a related issue but I couldn't find it)
if a task generates multiple products, a user might inadvertently run something like `df.to_csv(product)` instead of `df.to_csv(product['data'])`, which will throw a cryptic error. In another issue, we discussed running some static analysis to determine if this might happen but now I thought of an easier alternative.
most of the time, users are encountering this issue when writing to files using `open()` or the `pandas.to_{something}` function. we could catch the error and see if it contains a specific substring, then append to the error message to point out the solution | 0easy
|
Title: Scrapy docs: 'make htmlview' does not work
Body: ### Description
`Path.resolve()` returns a `PosixPath` _object_, so it is not possible to concatenate it with a string.
```
webbrowser.open('file://' + Path('build/html/index.html').resolve())
```
### Steps to Reproduce
```
cd docs/
make html
make htmlview
```
**Expected behavior:**
`webbrowser` should open the file `build/html/index.html`
**Actual behavior:**
Get the following error when trying to open the file:
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
TypeError: can only concatenate str (not "PosixPath") to str
```
### Versions
`2.8.0`
### Possible solution
Instead of trying to concatenate the string `file://` we could use [`as_uri()`](https://docs.python.org/3/library/pathlib.html#pathlib.PurePath.as_uri). For example:
```
htmlview: html
$(PYTHON) -c "import webbrowser; from pathlib import Path; \
webbrowser.open(Path('build/html/index.html').resolve().as_uri())"
```
| 0easy
|
Title: Replacing the obsolete applymap with map
Body: [DataFrame.applymap](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.applymap.html) has been deprecated.
I'd like to do a PR to fix this.
What does the community think?
Warning:
```
darts/tests/test_timeseries_multivariate.py::TestTimeSeries::test_map
/darts/darts/tests/test_timeseries.py:1449: FutureWarning: DataFrame.applymap has been deprecated. Use DataFrame.map instead.
df_0[["0"]] = df_0[["0"]].applymap(fn)
darts/tests/test_timeseries.py::TestTimeSeries::test_map
darts/tests/test_timeseries_multivariate.py::TestTimeSeries::test_map
/darts/darts/tests/test_timeseries.py:1450: FutureWarning: DataFrame.applymap has been deprecated. Use DataFrame.map instead.
df_2[["2"]] = df_2[["2"]].applymap(fn)
darts/tests/test_timeseries.py::TestTimeSeries::test_map
darts/tests/test_timeseries_multivariate.py::TestTimeSeries::test_map
/darts/darts/tests/test_timeseries.py:1451: FutureWarning: DataFrame.applymap has been deprecated. Use DataFrame.map instead.
df_01[["0", "1"]] = df_01[["0", "1"]].applymap(fn)
darts/tests/test_timeseries.py::TestTimeSeries::test_map
darts/tests/test_timeseries_multivariate.py::TestTimeSeries::test_map
/darts/darts/tests/test_timeseries.py:1452: FutureWarning: DataFrame.applymap has been deprecated. Use DataFrame.map instead.
df_012 = df_012.applymap(fn)
``` | 0easy
|
Title: Script to merge Flux diffuser into 1 safetensor file
Body: Hi everyone
I found a Flux model which is split into 3 parts. So far I can't find any script to merge the split parts of a Flux model into one.
Can anyone help me do it?
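A minimal sketch of how the shards could be merged, assuming the split follows the usual `*.safetensors.index.json` sharding layout (all file names below are placeholders):
```python
import json
from safetensors.torch import load_file, save_file

# The index file maps every tensor name to the shard file that contains it.
with open("flux1-dev.safetensors.index.json") as f:
    weight_map = json.load(f)["weight_map"]

merged = {}
for shard in sorted(set(weight_map.values())):
    merged.update(load_file(shard))  # load each shard and collect its tensors

save_file(merged, "flux1-dev-merged.safetensors")  # write a single file
```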

Thanks in advance | 0easy
|
Title: getting valueerror when using strategy all
Body: I'm using the GitHub version installed with pip.
**Describe the bug**
I am just getting a ValueError when trying to run the "all" strategy:
```sh
Traceback (most recent call last):
File "C:\Users\copyp\AppData\Local\Programs\Python\Python38\lib\multiprocessing\pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "C:\Users\copyp\AppData\Local\Programs\Python\Python38\lib\multiprocessing\pool.py", line 48, in mapstar
return list(map(*args))
File "C:\Users\copyp\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas_ta\core.py", line 429, in _mp_worker
return getattr(self, method)(*args, **kwargs)
File "C:\Users\copyp\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas_ta\core.py", line 750, in ebsw
result = ebsw(close=close, length=length, bars=bars, offset=offset, **kwargs)
File "C:\Users\copyp\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas_ta\cycles\ebsw.py", line 55, in ebsw
ebsw = Series(result, index=close.index)
File "C:\Users\copyp\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\core\series.py", line 348, in __init__
raise ValueError(
ValueError: Length of passed values is 40, index implies 35.
```
The above exception was the direct cause of the following exception:
```sh
Traceback (most recent call last):
File "profile.py", line 675, in <module>
dat,ind,ret = get_data(tick,period)
File "profile.py", line 33, in get_data
df.ta.strategy()
File "C:\Users\copyp\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas_ta\core.py", line 704, in strategy
[self._post_process(r, **kwargs) for r in results]
File "C:\Users\copyp\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas_ta\core.py", line 704, in <listcomp>
[self._post_process(r, **kwargs) for r in results]
File "C:\Users\copyp\AppData\Local\Programs\Python\Python38\lib\multiprocessing\pool.py", line 448, in <genexpr>
return (item for chunk in result for item in chunk)
File "C:\Users\copyp\AppData\Local\Programs\Python\Python38\lib\multiprocessing\pool.py", line 868, in next
raise value
ValueError: Length of passed values is 40, index implies 35.
```
```python
def get_data(ticker,period):
    """
    Gets data from CSV (0 - AAPL, 1 - MSI, 2 - SBUX).
    :return: T x 3 Stock Prices
    """
    ticker = yf.Ticker(ticker)
    df = ticker.history(period=period,interval='1d')
    #df = df.dropna()
    df.ta.strategy()
    return df
```
I will provide more info if you need it, thanks for your time | 0easy
|
Title: A11y: basic accessibility improvements
Body: For screenreader users,
- [ ] add aria-labels to webapp buttons
- [ ] make progress bar at the top of page less irritating
- [ ] add headings to jump to main / form section | 0easy
|
Title: URL.render_as_string doesn't always give an URL that is parsed same way
Body: ### Describe the bug
The code below fails because the database name is not escaped.
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
_No response_
### SQLAlchemy Version in Use
2.0.1
### DBAPI (i.e. the database driver)
sqlite
### Database Vendor and Major Version
sqlite
### Python Version
3.11
### Operating system
linux
### To Reproduce
```python
import sqlalchemy
from sqlalchemy.engine import URL
first_url = URL.create('sqlite', username=None, password=None, host=None, port=None, database="a?b=c") # I could have this filename
first_url_as_string = first_url.render_as_string()
second_url = sqlalchemy.make_url(first_url_as_string)
assert first_url.database == second_url.database, f"Fail, expected: {first_url.database} found: {second_url.database}"
```
### Error
```
AssertionError: Fail, expected: a?b=c found: a
```
### Additional context
I didn't see any doc claiming that this should be a round-trip conversion, but it seems it's intended to be, because other fields are escaped.
| 0easy
|
Title: `robot:skip` and `robot:exclude` tags do not support variables
Body: I try to use a variable in the second test so that the resulting tag is `robot:skip`:
```robotframework
*** Variables ***
${SKIP_FLAG} skip
*** Test Cases ***
Test One
Log Test One was executed.
VAR ${SKIP_FLAG} skip scope=suite
Test two
[Tags] robot:${SKIP_FLAG}
Log Test Two was executed.
```
My expectation was that I can conditionally skip other tests _completely_.
The tag gets set correctly, but the test still gets executed.
https://robotframework.org/code/?codeProject=N4IgdghgtgpiBcIDCBXAzgFwPZRAGhABMY0BjAJwEsAHDSrMBEfEAM0oBsSEBtUdrgDlocRABUSGAMopKGGADpyWAEZYMLUg3lgNiADpgAVCYAEANQhUIKrmlMmjhgCTApAaQCSABQD6AMQAZAEEAcQBfUyjTNABrGkNDR1MJTFMkCDQSBxNDVIxTAHkwGENo00CsAHMo-KKS0wB3TNMYAA8YUhR5QgUy6PNggCUo1w8fAJCIqLiaGa1qGABeNFl5RLA6jEasfqieMQgqtABdKOU1DHgxrz8gsPC9iurayRSdppb2zu6YXpBwicCMQqAA3P7eZQAK06egw5BQMAIF3U5hg5DQ9EYiAA7AoAIwElgojDBcjHBDAcLImAAR1k5BgsF0aF4J3CQA | 0easy
|
Title: Deprecate Python 3.7
Body: We have already deprecated Python 3.6 (#4295) and plan to remove support for it in RF 7.0 (#4294). Python 3.7 will reach its end-of-life in [June 2023](https://peps.python.org/pep-0537/) and it's clear RF 7.0 won't be released before that. In this situation it makes sense to drop also Python 3.7 support in RF 7.0 and thus we need to deprecate it now.
Similarly as with Python 3.6 deprecation, it doesn't make sense to emit deprecation warnings or make any other code changes. We only need to make sure info about supported versions in the README and in the UG is correct and mention this in RF 6.1 release notes. | 0easy
|
Title: [ENH] Add `class_weight` parameter to classifiers that use sklearn based estimators with testing
Body: ### Describe the feature or idea you want to propose
following #1776
### Describe your proposed solution
other classifiers need this update as well, like feature based for example
Also should add testing for this setup
### Describe alternatives you've considered, if relevant
we should make sure we cover all of them
### Additional context
_No response_ | 0easy
|
Title: Add IsMonthStart primitive
Body: - This primitive determines the `is_month_start` of a datetime column | 0easy
|
Title: Bug in wholesale price settings
Body: 
As shown in the screenshot: when the quantity equals the right endpoint of a price range, the price jumps straight to the cheapest tier.
The same price is also used at payment. | 0easy
|
Title: Modify `ForeignKeyWidget` to make it easier to override args
Body: This is a potential enhancement
If you want to load a related model using a mixture of fields in `ForeignKeyWidget`, you have to override the whole `clean()` method.
If there was a `get_lookup_kwargs(value, row=None, **kwargs)` method, then users would only need to override this method and not the whole `clean()` method.
e.g. see [this SO question](https://stackoverflow.com/q/74647054/39296)
| 0easy
|
Title: Clean up test_download_gzip_response
Body: The test is currently skipped due to some Python-2-only code. We should instead remove that code.
https://github.com/scrapy/scrapy/blob/42b3a3a23b328741f1720953235a62cba120ae7b/tests/test_downloader_handlers.py#L729-L754 | 0easy
|
Title: Good First Issue >> Price Volume Rank
Body: **Which version are you running?**
The latest.
**Is your feature request related to a problem? Please describe.**
No
**Describe the solution you'd like**
[Price Volume Rank](https://www.fmlabs.com/reference/default.htm)
**Additional context**
Good first issue. | 0easy
|
Title: Display distribution of each node
Body: The distribution plot on the right pane shows the distribution for all data points. It would be nice to show the distribution of points in each node.
One option for this could be to show a pie chart at each node, or during hover, change the histogram in the right hand pane.
| 0easy
|
Title: `autocomplete` on Forms
Body: We should support [`autocomplete`](https://developer.mozilla.org/en-US/docs/Web/HTML/Attributes/autocomplete) on forms via `json_schema_extra`, e.g.:
```py
class LoginForm(BaseModel):
email: EmailStr = Field(title='Email Address', description="Enter whatever value you like")
password: SecretStr = Field(
title='Password',
description="Enter whatever value you like",
json_schema_extra={'autocomplete': 'current-password'}
)
``` | 0easy
|
Title: Deprecate `db_assignments` and `db_students`
Body: Now that there are better ways of adding assignments and students into the gradebook, I don't think it makes sense to keep these config values around anymore. | 0easy
|
Title: ENH: Clean up warnings
Body: There are many warnings in CI now, mainly because some functions are using deprecated interfaces, and we need to clean up these warnings. | 0easy
|
Title: [Feature request] Add apply_to_images to ShotNoise
Body: | 0easy
|
Title: RFE: Prefer TOML-native configuration over legacy INI
Body: ## What's the problem this feature will solve?
- I want old tox versions to know I need tox \>= 4.21
- I want to use TOML-native configuration
If I have this:
```toml
[tool.tox]
requires = ["tox>=4.21"]
env_list = ["3.13", "3.12", "type", "doc"]
...
```
The old tox does not read it and ignores it.
If I change it like this:
```toml
[tool.tox]
legacy_tox_ini = """
[tox]
min_version = 4.21
"""
env_list = ["3.13", "3.12", "type", "doc"]
...
```
The old tox reads it and updates tox in the provision venv.
However, the TOML-native configuration is ignored by the new tox.
## Describe the solution you'd like
I believe this could be solved if the new tox preferred the TOML-native configuration over legacy_tox_ini if both exist.
## Alternative Solutions
I could stick with legacy_tox_ini unless I know all environments are fully updated to tox > 4.21.
Thanks | 0easy
|
Title: [RFC] Link checker for documentation
Body: **Is your feature request related to a problem? Please describe.**
Often internal links can be misspelled or external links be down.
**Describe the solution you'd like**
Implement a link checker for the documentation. The fixing of the links could still be manual.
Also, we could try to incorporate it into `make all` and CI and see if it produces reliable output.
**Describe alternatives you've considered**
None.
**Additional context**
Mkdocs has a simple internal link [validation](https://www.mkdocs.org/user-guide/configuration/#validation), but maybe some third-party tool can be used instead (or along with it) for external links. | 0easy
|
Title: FEAT: Implement groupby nth
Body: ### Is your feature request related to a problem? Please describe
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.nth.html
### Describe the solution you'd like
A clear and concise description of what you want to happen.
### Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.
### Additional context
Add any other context or screenshots about the feature request here.
| 0easy
|
Title: How would I aggregate only one field?
Body: When I turn on the aggregation function, it turns my plot into a single point, which is not very helpful. Is there a way I can see the average of one field while leaving the other field as raw values? This is especially helpful when I have categorical data. | 0easy
|
Title: RoBERTa on SuperGLUE's 'Multi-Sentence Reading Comprehension' task
Body: MultiRC is one of the tasks of the [SuperGLUE](https://super.gluebenchmark.com) benchmark. The task is to re-trace the steps of Facebook's RoBERTa paper (https://arxiv.org/pdf/1907.11692.pdf) and build an AllenNLP config that reads the MultiRC data and fine-tunes a model on it. We expect scores in the range of their entry on the [SuperGLUE leaderboard](https://super.gluebenchmark.com/leaderboard).
This can be formulated as a multiple choice task, using the [`TransformerMCTransformerToolkit `](https://github.com/allenai/allennlp-models/blob/main/allennlp_models/mc/models/transformer_mc_tt.py#L14) model, analogous to the PIQA model. You can start with the [experiment config](https://github.com/allenai/allennlp-models/blob/main/training_config/tango/piqa.jsonnet) and [dataset reading step](https://github.com/allenai/allennlp-models/blob/main/allennlp_models/mc/tango/piqa.py#L13) from PIQA, and adapt them to your needs. | 0easy
|
Title: docs/hardware/pasqal/getting_started.ipynb fails to execute in Google Colab
Body: **Description of the issue**
The cirq-pasqal getting_started.ipynb notebook calls [cirq.optimize_for_target_gateset](https://github.com/quantumlib/Cirq/blob/9c836ba3eef9f7d71a39b554632deb3463b1385b/docs/hardware/pasqal/getting_started.ipynb#L206), which should produce a circuit that validates with respect to the [pasqal device](https://github.com/quantumlib/Cirq/blob/9c836ba3eef9f7d71a39b554632deb3463b1385b/docs/hardware/pasqal/getting_started.ipynb#L211), but this fails when run in a https://colab.research.google.com/ notebook.
The same notebook passes if run on a local Debian Linux OS. This seems to be caused by different round-off errors of the CPUs, which lead to differing optimized circuits.
The root cause appears to be that `cirq.optimize_for_target_gateset` does not ensure that for PasqalGateset the new
circuit has single-operation moments; they just happen to be so in getting_started.ipynb thanks to favorable round-off errors.
The optimization may also produce a moment with 2 operations, which fails validation as follows:
**How to reproduce the issue**
Create and run fresh colab at https://colab.research.google.com
```python
# cell 1
!pip install "cirq_pasqal==1.4.1"
# cell 2
import cirq
import cirq_pasqal
from cirq_pasqal import TwoDQubit, PasqalVirtualDevice
p_qubits = TwoDQubit.square(6) # 6x6 square array of TwoDQubits
# Initialize and create a circuit
initial_circuit = cirq.Circuit()
initial_circuit.append(cirq.CZ(p_qubits[0], p_qubits[1]))
initial_circuit.append(cirq.Z(p_qubits[0]))
initial_circuit.append(cirq.CX(p_qubits[0], p_qubits[2]))
# Create a Pasqal device with a control radius of 2.1 (in units of the lattice spacing)
p_device = PasqalVirtualDevice(control_radius=2.1, qubits=p_qubits)
pasqal_gateset = cirq_pasqal.PasqalGateset(include_additional_controlled_ops=False)
print("INITIAL\n")
print(initial_circuit)
print()
pasqal_circuit = cirq.optimize_for_target_gateset(
initial_circuit, gateset=pasqal_gateset
)
print("OPTIMIZED\n")
print(pasqal_circuit)
print("\nVALIDATION\n")
p_device.validate_circuit(pasqal_circuit)
```
<details>
INITIAL
(0, 0): โโโ@โโโZโโโ@โโโ
โ โ
(1, 0): โโโ@โโโโโโโโผโโโ
โ
(2, 0): โโโโโโโโโโโXโโโ
OPTIMIZED
(0, 0): โโโ@โโโZโโโโโโโโโโโโโโโโโโ@โโโZโโโโโโโโโโโโโโโโโโโโโ
โ โ
(1, 0): โโโ@โโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโ
โ
(2, 0): โโโโโโโโโโโPhX(0.5)^0.5โโโ@โโโPhX(-0.5)^0.5โโโZ^0โโโ
VALIDATION
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-4-aa6a2651e2d5>](https://localhost:8080/#) in <cell line: 28>()
26
27 print("\nVALIDATION\n")
---> 28 p_device.validate_circuit(pasqal_circuit)
2 frames
[/usr/local/lib/python3.10/dist-packages/cirq_pasqal/pasqal_device.py](https://localhost:8080/#) in validate_circuit(self, circuit)
137 ValueError: If the given circuit can't be run on this device
138 """
--> 139 super().validate_circuit(circuit)
140
141 # Measurements must be in the last non-empty moment
[/usr/local/lib/python3.10/dist-packages/cirq/devices/device.py](https://localhost:8080/#) in validate_circuit(self, circuit)
84 """
85 for moment in circuit:
---> 86 self.validate_moment(moment)
87
88 def validate_moment(self, moment: 'cirq.Moment') -> None:
[/usr/local/lib/python3.10/dist-packages/cirq_pasqal/pasqal_device.py](https://localhost:8080/#) in validate_moment(self, moment)
233 for operation in moment:
234 if not isinstance(operation.gate, cirq.MeasurementGate):
--> 235 raise ValueError("Cannot do simultaneous gates. Use cirq.InsertStrategy.NEW.")
236
237 def minimal_distance(self) -> float:
ValueError: Cannot do simultaneous gates. Use cirq.InsertStrategy.NEW.
</details>
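A possible workaround (not from the notebook; just a sketch using standard cirq APIs) is to rebuild the optimized circuit so that every operation gets its own moment, as the error message suggests:

```python
# Sketch of a workaround: re-pack the optimized circuit so each operation
# occupies its own moment, which PasqalVirtualDevice validation requires.
aligned_circuit = cirq.Circuit(
    pasqal_circuit.all_operations(), strategy=cirq.InsertStrategy.NEW
)
p_device.validate_circuit(aligned_circuit)  # expected to pass with single-op moments
```

Whether the transformer itself should guarantee single-operation moments for PasqalGateset is the actual question raised here.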
**Cirq version**
1.4.1 | 0easy
|
Title: Method to pass common params to all the nodes in the graph
Body: **Is your feature request related to a problem? Please describe.**
There is no way to pass many parameters all at once to each node in a graph. For example, it would be nice to have the `verbose` flag already set in all the nodes.
What I am doing now is defining the params inside the `AbstractGraph` (which is inherited in all the scraping graphs) with something like:

and then pass it in each node explicitly, for example inside `SmartScraperGraph`:

You may understand that this method doesn't scale up well.
**Describe the solution you'd like**
I would like a function like this inside `AbstractGraph` that updates the `node_config` dict of each node:

**Describe alternatives you've considered**
None
**Additional context**
None
| 0easy
|
Title: Add support for pypy3
Body: The PyPy team has added support for Python 3 (see https://morepypy.blogspot.it/2017/03/pypy27-and-pypy35-v57-two-in-one-release.html).
We should add this to our test matrix (a sketch of the changes follows this list):
- In the tox.ini file
- In the travis.yml test setup
- In the setup.py "classifiers / implementation" section
- In the README.rst
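A rough sketch of the setup.py part (the classifier strings are standard trove classifiers; the surrounding arguments are placeholders for the project's existing ones):

```python
# Sketch: advertise PyPy support in the package metadata.
from setuptools import setup

setup(
    # ...existing arguments stay as they are...
    classifiers=[
        # ...existing classifiers...
        "Programming Language :: Python :: Implementation :: CPython",
        "Programming Language :: Python :: Implementation :: PyPy",
    ],
)
```

In tox.ini this would pair with adding a `pypy3` entry to `envlist`, and in .travis.yml with a `pypy3` entry in the Python build matrix.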
Implementation should be:
1. Add those changes
2. Make a pull-request
3. Check that all goes fine
4. Add yourself to the CREDITS file if this is your first contribution
5. Enjoy! | 0easy
|
Title: Honeypot2 in Python 3
Body: Bug in groupby wrangling (referrer) | 0easy
|
Title: clean duplications of Xcom backend docs from core
Body: ### Body
Most of the text in [Object Storage XCom Backend](https://airflow.apache.org/docs/apache-airflow/2.10.5/core-concepts/xcoms.html#object-storage-xcom-backend) is duplicated with [Object Storage XCom Backend (common-io provider)](https://airflow.apache.org/docs/apache-airflow-providers-common-io/1.5.1/xcom_backend.html#object-storage-xcom-backend)
The task:
The core part should give a high-level explanation and forward to the provider docs for further reading.
Similar to what we do with [Kubernetes](https://airflow.apache.org/docs/apache-airflow/2.10.5/administration-and-deployment/kubernetes.html#kubernetes): we give a high-level explanation and forward to additional reading in the Helm chart or provider docs.
### Committer
- [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | 0easy
|
Title: Add datasets metadata to documentation
Body: **Describe the issue**
Create documentation on metadata on example datasets such as number of samples, number of features, target/class.
For example, [sklearn.datasets.load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html).
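As a sketch of how the numbers for such a metadata table could be gathered (using the sklearn digits example above; analogous calls would apply to the example datasets shipped with the project):

```python
# Sketch: pull the sample count, feature count, and classes that the docs
# would list for each example dataset.
from sklearn.datasets import load_digits

data = load_digits()
print("samples: ", data.data.shape[0])        # 1797
print("features:", data.data.shape[1])        # 64
print("classes: ", sorted(set(data.target)))  # digits 0..9
```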
<!-- If you have a question, note that you can email us via our listserve:
https://groups.google.com/forum/#!forum/yellowbrick -->
<!-- This line alerts the Yellowbrick maintainers, feel free to use this
@ address to alert us directly in follow up comments -->
@DistrictDataLabs/team-oz-maintainers
| 0easy
|
Title: Chat control height property should force widget height regardless of messages
Body: ### Description
Setting height when using the chat control should limit the chat window's height regardless of the messages' size. Currently, it overflows:

The chat window should have a fixed size defined by the height property:

Take https://demo-llm-chat.taipy.cloud/ as a reference
Example code (just type long messages to reproduce):
```python
from taipy.gui import Gui, Icon
msgs = [
["1", "msg 1", "Fred"],
["2", "msg From Another unknown User", "Fredo"],
["3", "This from the sender User", "taipy"],
["4", "And from another known one", "Fredi"],
]
users = [
["Fred", Icon("/images/favicon.png", "Fred.png")],
["Fredi", Icon("/images/fred.png", "Fred.png")],
]
def button_action(state, var_name: str, payload: dict):
args = payload.get("args", [])
msgs.append([f"{len(msgs) +1 }", args[2], "Fred"])
state.msgs = msgs
Gui(
"""
<|{msgs}|chat|users={users}|on_action=button_action|height=300|sender_id=Me|>
""",
path_mapping={"images": "c:\\users\\jeu\\downloads"},
).run()
```
### Acceptance Criteria
- [ ] Ensure new code is unit tested, and check code coverage is at least 90%.
- [ ] Propagate any change on the demos and run all of them to ensure there is no breaking change.
- [ ] Ensure any change is well documented.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | 0easy
|
Title: Make OpenAI provider compatible with `openai>=1.66.0`
Body: ### Body
OpenAI provider Tests are failing with:
```
___ ERROR collecting providers/openai/tests/unit/openai/hooks/test_openai.py ___
ImportError while importing test module '/opt/airflow/providers/openai/tests/unit/openai/hooks/test_openai.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/local/lib/python3.9/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
providers/openai/tests/unit/openai/hooks/test_openai.py:25: in <module>
from openai.types.beta import (
E ImportError: cannot import name 'VectorStore' from 'openai.types.beta' (/usr/local/lib/python3.9/site-packages/openai/types/beta/__init__.py
```
This is due to https://github.com/openai/openai-python/pull/2175.
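A hedged way to keep the test module importable across both ranges is a fallback import; the exact post-1.66 location of `VectorStore` is an assumption based on the linked PR:

```python
# Sketch: try the newer (GA) location first, fall back to the old beta namespace.
try:
    from openai.types import VectorStore  # openai >= 1.66, vector stores moved out of beta (assumed)
except ImportError:
    from openai.types.beta import VectorStore  # openai < 1.66
```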
### Committer
- [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | 0easy
|
Title: Switch internal icons to FontAwesome6
Body: ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
_No response_
### Describe the solution you would like.
_No response_
### Describe alternatives you considered
_No response_
### Additional context
_No response_ | 0easy
|
Title: [FEATURE]: Stale bot automation
Body: ### Feature summary
Stale bot for cleaning up stale issues
### Feature description
Using GitHub Actions, add a stale bot for cleaning up stale issues.
https://github.com/actions/stale
### Motivation
_No response_
### Alternatives considered
_No response_
### Additional context
_No response_ | 0easy
|
Title: [ENH] make api version inspectable
Body: currently must:
```
from graphistry.pygraphistry import PyGraphistry
PyGraphistry.api_version()
```
which is not discoverable | 0easy
|
Title: [Feature request] Add apply_to_images to Illumination
Body: | 0easy
|
Title: [benchmark] Add an option to generate random images for benchmarks
Body: | 0easy
|
Title: Add new language features from 3.5 to cfg
Body: | 0easy
|
Title: can I merge two indicators
Body: If I wrote help(ta.inductor)
It will print calculations of inductors so can I merge tow indicators to become one indictor ? | 0easy
|
Title: [BUG] Input validations for APIs missing
Body: ## Describe the bug
When updating anomaly_params for a KPI, if the anomaly_params passed is null, it causes an HTTP 500 response.
## Explain the environment
- **Chaos Genius version**: https://github.com/chaos-genius/chaos_genius/commit/9e2ac69f06a5ed8e17bd173c18f24e2769a81c3c reproduced on test builds from Develop branch
## Current behavior
HTTP 500
## Expected behavior
It should cause a validation error with an HTTP 4xx response.
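For illustration, a minimal guard of the kind that would turn this into a 4xx; the function name mirrors the traceback, but this is a sketch, not the project's actual code:

```python
# Hypothetical sketch: reject a null/invalid anomaly_params payload up front
# instead of letting anomaly_params.keys() raise and surface as an HTTP 500.
def validate_partial_anomaly_params(anomaly_params):
    if not isinstance(anomaly_params, dict):
        return "anomaly_params must be a non-null JSON object", None
    # ...existing field-level validation would continue here...
    return None, anomaly_params
```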
## Logs
```
{"asctime": "2022-03-11 11:46:27,599", "levelname": "ERROR", "name": "chaos_genius", "message": "Exception on /api/anomaly-data/16/anomaly-params [POST]", "lineno": 1440, "funcName": "log_exception", "filename": "app.py", "exc_info": "Traceback (most recent call last):\n File \"/usr/local/lib/python3.8/dist-packages/flask/app.py\", line 2051, in wsgi_app\n response = self.full_dispatch_request()\n File \"/usr/local/lib/python3.8/dist-packages/flask/app.py\", line 1501, in full_dispatch_request\n rv = self.handle_user_exception(e)\n File \"/usr/local/lib/python3.8/dist-packages/flask_cors/extension.py\", line 165, in wrapped_function\n return cors_after_request(app.make_response(f(*args, **kwargs)))\n File \"/usr/local/lib/python3.8/dist-packages/flask/app.py\", line 1499, in full_dispatch_request\n rv = self.dispatch_request()\n File \"/usr/local/lib/python3.8/dist-packages/flask/app.py\", line 1485, in dispatch_request\n return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)\n File \"/usr/src/app/chaos_genius/views/anomaly_data_view.py\", line 295, in kpi_anomaly_params\n err, new_anomaly_params = validate_partial_anomaly_params(\n File \"/usr/src/app/chaos_genius/views/anomaly_data_view.py\", line 736, in validate_partial_anomaly_params\n if fields.isdisjoint(set(anomaly_params.keys())):\nAttributeError: 'NoneType' object has no attribute 'keys'"}
``` | 0easy
|
Title: Airflow not retrying when capacity not available at AWS ECS hook
Body: ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==9.2.0
### Apache Airflow version
2.10.5
### Operating System
docker image apache/airflow:2.10.5
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### What happened
Sometimes, when using Fargate Spot, AWS fails to start a task due to a lack of available resources (see the AWS response below). Airflow does not retry launching the task when that is the case.
```
"responseElements": {
"failures": [
{
"reason": "Capacity is unavailable at this time. Please try again later or in a different availability zone"
}
],
"tasks": []
}
```
### What you think should happen instead
I would expect Airflow to retry automatically, as lack of capacity is usually a temporary issue and a few seconds' delay may be enough to start the task. It already does this for several other errors, as seen in [https://github.com/apache/airflow/blob/b93c3db6b1641b0840bd15ac7d05bc58ff2cccbf/airflow/providers/amazon/aws/hooks/ecs.py#L31](https://github.com/apache/airflow/blob/b93c3db6b1641b0840bd15ac7d05bc58ff2cccbf/airflow/providers/amazon/aws/hooks/ecs.py#L31)
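For illustration, a sketch of the kind of check that could classify this failure as transient (a purely hypothetical helper; the provider's real retry hooks may be structured differently):

```python
# Hypothetical helper: treat a capacity-unavailable failure reason in the
# ECS RunTask response as transient so the task launch can be retried.
TRANSIENT_REASON_SNIPPETS = ("Capacity is unavailable",)

def is_transient_run_task_failure(response: dict) -> bool:
    return any(
        snippet in failure.get("reason", "")
        for failure in response.get("failures", [])
        for snippet in TRANSIENT_REASON_SNIPPETS
    )
```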
### How to reproduce
It depends on the load of the specific availability zones, so it is not possible to reproduce with 100% accuracy, but we are seeing it on a daily basis in our scheduled tasks in **eu-west-1**.
### Anything else
We are more than happy to make or test a fix, but it is our first time looking under the hood of Airflow, so a bit of guidance will be needed.
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| 0easy
|
Title: BUG: multi-indexed dataframes' shape
Body: ### Describe the bug
The shape of a multi-indexed dataframe is incorrect.
### To Reproduce
In Xorbits,
```python
df = pd.DataFrame({"foo": (1, 2, 3), "bar": (4, 5, 6)})
c = df.rolling(window=2).agg('corr')
c.shape
```
you get `(3, 2)`.
But in pandas, you get `(6, 2)`.
| 0easy
|
Title: Use JULIA_LOAD_PATH for julia env handling
Body: https://github.com/jupyter/repo2docker/pull/595 writes a jupyter kernel that automatically loads the `Project.toml` env that is in the root. This works, but it also means that any julia shell for example does not have the environment loaded by default, because only this particular Jupyter kernel automatically loads the env.
It might be a better strategy to instead add the root folder of the repo to the `JULIA_LOAD_PATH` env variable. I believe that should make the environment available to all Julia processes, not just Jupyter kernels.
I would suggest we first ship the current implementation in the PR (which is not incorrect), and then investigate this option here. | 0easy
|
Title: [BUG] Generated cURL command misses the `Content-Type` header
Body: Currently, it is explicitly removed. For Python, these headers are added automatically by `requests`, so for cURL they should not be removed. | 0easy
|
Title: CSV adapter fails when there's only 1 row of data
Body: Given a CSV file with a single row of data:
```bash
% cat test.csv
"a","b"
1,2
```
Shillelagh will fail:
```bash
% shillelagh
sql> SELECT * FROM "csv://test.csv";
Traceback (most recent call last):
File "/Users/beto/Projects/shillelagh/src/shillelagh/backends/apsw/db.py", line 191, in execute
self._cursor.execute(operation, parameters)
apsw.SQLError: SQLError: no such table: csv://test.csv
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/beto/Projects/shillelagh/venv/bin/shillelagh", line 33, in <module>
sys.exit(load_entry_point('shillelagh', 'console_scripts', 'shillelagh')())
File "/Users/beto/Projects/shillelagh/src/shillelagh/console.py", line 210, in main
cursor.execute(sql)
File "/Users/beto/Projects/shillelagh/src/shillelagh/backends/apsw/db.py", line 66, in wrapper
return method(self, *args, **kwargs)
File "/Users/beto/Projects/shillelagh/src/shillelagh/backends/apsw/db.py", line 202, in execute
self._create_table(uri)
File "/Users/beto/Projects/shillelagh/src/shillelagh/backends/apsw/db.py", line 251, in _create_table
self._cursor.execute(
File "/Users/beto/Projects/shillelagh/src/shillelagh/backends/apsw/vt.py", line 205, in Create
adapter = self.adapter(*deserialized_args)
File "/Users/beto/Projects/shillelagh/src/shillelagh/adapters/file/csvfile.py", line 124, in __init__
self.columns = {
File "/Users/beto/Projects/shillelagh/src/shillelagh/adapters/file/csvfile.py", line 127, in <dictcomp>
order=order[column_name],
KeyError: 'a'
``` | 0easy
|
Title: Feature Request 【plz support InternLM2.5】
Body: Hi,
I noticed that the repository currently lacks support for the InternLM2.5-7B (1.8B, 20B) model, which may cause compatibility issues or missing steps for users trying to implement it. It would be beneficial to update the repository with detailed instructions or tools for integrating and using the InternLM2.5-7B model, ensuring the content remains relevant.
I believe that adding this support would significantly enhance the usability of the project. While some manual adjustments are possible, official guidance or toolchain support would be much more efficient and advantageous, especially for new users.
If possible, including example scripts or demonstrating integration with InternLM2.5-7B would also be a valuable addition.
For further support, please add the InternLM Assistant (WeChat search: InternLM) or join the InternLM Discord(https://discord.com/invite/xa29JuW87d). | 0easy
|
Title: [DOC] `</li>` tags appear at the end of the list in partition clustering notebook
Body: ### Describe the issue linked to the documentation
Some `</li>` tags appear in the partition clustering notebook:

### Suggest a potential alternative/fix
Find a way to remove them, but I'm not sure why they appear in the first place. | 0easy
|
Title: Leading and internal spaces should be preserved in documentation
Body: Hi,
I found that `.. code:: robotframework` directives do not work well in the resource file format when I set the documentation format to reStructuredText.
```robot
*** Keywords ***
Simple keyword
[Documentation]
... .. code:: robotframework
...
... *** Test Cases ***
... Simple test case
... Simple keyword
```
docutils outputs the `<string>:1: (ERROR/3) Content block expected for the "code" directive; none found.` error message.
I tried to find out why it happens and found that libdoc visits each token and loses the whitespace tokens.
After the parse_resource_file function is invoked, the keyword's doc remains like the following.
This is not a valid format for docutils.
```
.. code:: robotframework
*** Test Cases ***
Simple test case
Simple keyword
```
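For what it's worth, the difference can be reproduced directly with docutils (a small sketch; it needs pygments installed for the language argument, and the HTML writer choice is arbitrary):

```python
# Sketch: the flattened doc (content no longer indented under the directive)
# triggers the "Content block expected" error; the indented one renders fine.
from docutils.core import publish_string

flattened = """.. code:: robotframework

*** Test Cases ***
Simple test case
    Simple keyword
"""

indented = """.. code:: robotframework

   *** Test Cases ***
   Simple test case
       Simple keyword
"""

publish_string(indented, writer_name="html")   # works
publish_string(flattened, writer_name="html")  # emits ERROR/3: Content block expected
```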
Would it be possible to preserve whitespace when using reStructuredText as the documentation format? | 0easy
|
Title: [QOL] Add workflow for auto-release
Body: Add a workflow for auto-release (bumping versions in `pyproject.toml`, bumping it to dockerhub/githubcr, and any other version references that need to be updated.
Currently every release will push the latest version to 3 places:
- [Pypi](https://pypi.org/project/GraphQLer/)
- [Dockerhub](https://hub.docker.com/r/omar2535/graphqler)
- [Github Container Registry](https://github.com/omar2535/GraphQLer/pkgs/container/graphqler)
## Possible idea
Could be done through specific commits as a GitHub Action, e.g. a pull request whose title starts with `[RELEASE]`.
|
Title: mutlilabel task
Body: I want to know whether autosklearn supports multilabel tasks, thank you! | 0easy
|
Title: Docs: bad links in flask migration docs
Body: ### Summary
https://docs.litestar.dev/latest/migration/flask.html
<img width="936" alt="image" src="https://github.com/litestar-org/litestar/assets/45884264/e0b452be-8b6e-4b0f-96e1-6c66a81322a3"> | 0easy
|
Title: [Docs] Need more examples on Tabular Data
Body: The Tabular dataset example only gives a few use cases that cleanlab can be used for. However, the community is looking for other use cases on tabular data, perhaps related to eCommerce or Fintech.
1. https://docs.cleanlab.ai/stable/tutorials/tabular.html
2. https://docs.cleanlab.ai/stable/tutorials/datalab/tabular.html | 0easy
|
Title: Redirects in Django CMS 4 (missing the "This page has no preview!"-message)
Body: <!--
Please fill in each section below, otherwise, your issue will be closed.
This info allows django CMS maintainers to diagnose (and fix!) your issue
as quickly as possible.
-->
## Description
In Django CMS 3 when you configured a page as a redirect to another page (Page > Advanced Settings) the page showed a message _"This page has no preview! It is being redirected to: <path>"_ whenever you opened it in the frontend while being logged in.
In Django CMS 4 this message is no longer available. When you open a redirecting page it shows the (most likely empty) page if you are in editing mode, or it redirects to the configured target page if you are in preview mode. So while in editing mode you don't get any indication that a redirect has been configured for the current page, which seems to be a source of confusion for CMS editors.
## Suggested behavior
My suggestion is to reintroduce a similar behavior as in Django CMS 3. The message _"This page has no preview! (...)"_ should be displayed in both editing **and** preview mode, thereby making it easier to edit the redirecting page.
## Do you want to help fix this issue?
<!--
The django CMS project is managed and kept alive by its open source community and is backed by the [django CMS Association](https://www.django-cms.org/en/about-us/). We therefore welcome any help and are grateful if people contribute to the project. Please use 'x' to check the items below.
-->
* [x] Yes, I want to help fix this issue by testing a new version before its release.
* [ ] No, I only want to report the issue.
| 0easy
|